OAuth 2 server side vs client side - language-agnostic

I'm trying to wrap my head around OAuth 2 and am comparing the server-side and client-side flows. To me the server-side flow sounds much simpler: the user authorizes once and then everything stays on the server (exchanging the code for an access token, requests to the remote API, etc.).
So, why would someone want to use the client-side flow?
One possible answer might be to reduce server traffic. Does anyone have any evidence that the client-side flow significantly reduces traffic to the server?

I think it would be unlikely for approvals and access token grants to make up any significant fraction of a server's traffic load unless it's implemented in a very unusual way. One might use the client-side flow if the application is very JavaScript-centric and has no other reason to contact a web server for its service. For example, you could imagine a browser extension written in JavaScript that uses OAuth 2 to request someone's favorite YouTube videos, Facebook friends, or some other data, and display it to the user in some fashion. It may not make sense to dedicate a web server to serving those grants if it would perform no other function for the application.
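For illustration, here is a rough sketch of what that client-side (implicit) flow might look like in such a JavaScript-only extension. The authorization endpoint, client ID, redirect URI, and scope below are made-up placeholders, not values from any real provider:

```typescript
// Hypothetical client-side (implicit grant) flow: send the user to the provider's
// authorization endpoint and read the access token off the redirect URL fragment.
const authEndpoint = "https://provider.example.com/oauth2/authorize"; // placeholder
const clientId = "YOUR_CLIENT_ID";                                    // placeholder
const redirectUri = "https://app.example.com/callback";               // placeholder

function startLogin(): void {
  const params = new URLSearchParams({
    response_type: "token", // implicit grant: token comes back in the URL fragment
    client_id: clientId,
    redirect_uri: redirectUri,
    scope: "read_favorites",
  });
  window.location.assign(`${authEndpoint}?${params.toString()}`);
}

// On the redirect page, the token arrives in the fragment and never touches our own server.
function readTokenFromFragment(): string | null {
  const fragment = new URLSearchParams(window.location.hash.slice(1));
  return fragment.get("access_token");
}
```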

Related

Transmitting Data Between Websites Via The Internet

I've edited this question. I hope this version is a bit more clear.
I am looking to have a programmer build a process for me, and I need to make sure that what is recommended follows best practice for the process below.
Here are the steps I need to have built:
Have an HTTPS web form on my server that submits client-entered data into a database on my server. The data is personally identifiable information and needs to be securely transmitted in the next step.
Once the data is loaded into my database, I need to transfer it as encrypted JSON to a third-party server. The third party will decrypt the data, score it, and send it back to my server encrypted.
While the data is being sent and scored by the third party, the client will see a browser screen indicating that processing is in progress.
Once the data is scored and sent back to my server, it will be decrypted and the client's browser will be updated with options based on the score given by the third party.
Based on what I understand, I think an API on both my server and the third-party server might be best.
What is the best practice approach for the above process?
Below are some questions it would be very helpful to have addressed in your response.
1) Is the API approach the best?
2) What process does the third party use to decrypt the data I send, and vice versa? How do I prevent others from decrypting the data if it is intercepted?
3) While the data is being scored by the third party, the client's browser will show a processing screen. From a web development standpoint, how does this work? Also, how exactly is that screen triggered to update with results in the client's browser when the data is sent back from the third party?
The file you will be transmitting is, as you mentioned, encrypted, so the exact format depends on the encryption algorithm you use. Encrypted data is generally encoded as Base64 or hex, so after encryption the data will be passed along in one of those formats.
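As a rough illustration of that point (and only as an assumption about how you might do it), here is a sketch that encrypts a JSON payload with AES-256-GCM using Node's crypto module and Base64-encodes the result; the shared key and payload shape are placeholders:

```typescript
import { createCipheriv, randomBytes } from "crypto";

// Illustrative only: encrypt a JSON payload and Base64-encode it for transmission.
// The 32-byte key would be a secret agreed with the third party in advance.
function encryptPayload(payload: object, key: Buffer): string {
  const iv = randomBytes(12); // per-message nonce required by GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([
    cipher.update(JSON.stringify(payload), "utf8"),
    cipher.final(),
  ]);
  const tag = cipher.getAuthTag();
  // Ship iv + auth tag + ciphertext together, Base64-encoded, as described above.
  return Buffer.concat([iv, tag, ciphertext]).toString("base64");
}
```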
To answer your second question, "how will the receiving website receive the file?", there are several ways you can do this:
You can share the backend database your website is using; then the data is just a simple query away (by "shared" I mean both websites use the same database).
Another way of achieving this is to use an API that stores your data and can be used from any application that calls it.
Or you can set up a simple PHP server locally on your machine and send data between websites using HTTP GET or POST requests.
Also, avoid using unnecessary tags like web-development-server, data-transfer, or transmission; these are unrelated to your question. You should only add tags that are relevant; a simple web-development tag would be enough.
Also, please edit your question so we can understand it properly: what problems are you facing? What have you tried? What do you expect from an answer?
Please clarify your question further.
Your concept of files being sent around is somewhat off: in most cases none of this is ever written to disk, so there is no JSON file with a file name, and the data is not encrypted directly but only pushed through an encrypted channel. Most commonly both sides use either HTTPS or WSS as the protocol, which encrypts and decrypts the exchanged data transparently (all by itself). Depending on the protocol used, this requires either a combination of client and server, server and server, or a P2P network.
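To make that concrete, here is a minimal sketch of what the exchange might look like if both servers simply talk HTTPS; the URL and the API key header are hypothetical, and TLS does the encryption on the wire:

```typescript
// Sketch: the JSON never exists as an encrypted file on disk; it is serialized in memory
// and TLS encrypts it in transit. The scoring URL and auth header are placeholders.
async function sendForScoring(record: Record<string, unknown>): Promise<unknown> {
  const response = await fetch("https://scoring.example.com/api/score", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer YOUR_API_KEY", // placeholder credential
    },
    body: JSON.stringify(record), // plain JSON; HTTPS handles the encryption
  });
  if (!response.ok) throw new Error(`Scoring request failed: ${response.status}`);
  return response.json(); // the third party's score, decrypted transparently by TLS
}
```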
Further reading: Internetworking Basics - Computer and Information Science.

Real time communication between clients via websocket server on Google App Engine

This article describes what a WebSocket server for a chat application can look like. We are planning to implement something similar: when a message is sent to the server, it is routed to the correct recipient based on an authentication token, and the message is saved in a MySQL database.
We will eventually host the server on Google App Engine, and I suspect that will cause some issues with the approach described above, since it depends on all clients being connected to the same server, and that probably won't be the case when multiple instances are created as needed. Is there a way to connect all instances so that this won't be a problem (Pub/Sub maybe, although that would add cost), or should we find a different solution?
One idea I had was to use mysql-events to monitor the binlog from the WebSocket server for new rows in the messages table, but I read somewhere that this wasn't recommended. I can't find where I read that, though, so maybe it is the best solution after all.
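For what it's worth, a rough sketch of the Pub/Sub idea mentioned above might look like the following; the topic and subscription names are placeholders, and delivery semantics and cost would need to be checked against the Google Cloud documentation:

```typescript
import { PubSub } from "@google-cloud/pubsub";

// Sketch: each App Engine instance publishes incoming chat messages to a shared topic
// and subscribes so it can push them to its *own* connected WebSocket clients.
const pubsub = new PubSub();
const topic = pubsub.topic("chat-messages"); // placeholder topic name

export async function broadcast(message: { to: string; body: string }): Promise<void> {
  await topic.publishMessage({ data: Buffer.from(JSON.stringify(message)) });
}

export function listenForMessages(
  deliverLocally: (m: { to: string; body: string }) => void,
): void {
  const subscription = pubsub.subscription("chat-messages-instance-sub"); // placeholder
  subscription.on("message", (msg) => {
    deliverLocally(JSON.parse(msg.data.toString()));
    msg.ack();
  });
}
```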
Since you asked about other solutions, I would recommend looking at Firebase, specifically the Realtime Database. Out of the box it provides all of the functionality you need for realtime communication between connected clients, plus Cloud Messaging for clients that aren't connected.
Here's a tutorial that uses Firestore to create a realtime chat web app, but it can all be applied to the Realtime Database with minor modification. I say that because Firestore has expensive writes, which in my opinion make it unsuitable for a chat backend.
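As a sketch only (assuming the modular Firebase JS SDK and placeholder config values), the Realtime Database version of a chat room could look roughly like this; every connected client listens on the same path, so there is no WebSocket server of your own to scale across instances:

```typescript
import { initializeApp } from "firebase/app";
import { getDatabase, ref, push, onChildAdded } from "firebase/database";

// Placeholder config: replace with your project's values.
const app = initializeApp({ databaseURL: "https://YOUR-PROJECT.firebaseio.com" });
const db = getDatabase(app);
const messagesRef = ref(db, "rooms/general/messages");

// Write a message; Firebase fans it out to all listeners.
export function sendMessage(sender: string, text: string): void {
  push(messagesRef, { sender, text, sentAt: Date.now() });
}

// Each client gets new messages as they are added, regardless of which backend instance wrote them.
export function onNewMessage(handler: (msg: { sender: string; text: string }) => void): void {
  onChildAdded(messagesRef, (snapshot) => handler(snapshot.val()));
}
```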

How to fix performance in AJAX-based websites on spotty networks?

I've been traveling for the last couple of weeks and have found an issue with the way AJAX-based websites are constructed. I understand that having the page request only the pieces it needs is the most efficient method for the servers, but in an environment where the signal comes and goes, or is being throttled by a provider, most websites built on this model become completely unresponsive and turn every interaction into a several-minute wait.
In situations where bandwidth is limited, the best performance generally comes from websites that put all of their content on a single page that is constructed for the user before it is sent. I understand that this is not the RESTful way, but I was wondering if there is a middle ground.
Is there a way to batch many different AJAX calls so that the user sends only one large request to the server, which then compiles everything that was requested and returns it in one response? Or is this something that hasn't been standardized yet and would require a custom server architecture?
In a situation where bandwidth is extremely limited, everything you will try to do will be a pain.
Yes, in this scenario, frequently opening connections to the server through multiple requests (which is very typical of AJAX single-page applications) will make the experience worse than opening one single connection to the server.
However, you need to ask yourself if you want your web application to cater to clients with fast connections or to cater to clients with slow connections and design your web application accordingly. If you make it only to accommodate slow clients then the user experience for those with faster connections will suffer and vice versa.
You could also decide to cater to both audiences by creating a version for each, but that's a lot of extra work.
I have no idea what your web application does. But if it's simply to "view" data, then perhaps you can get away with loading all the data from the start. However, if your web application contains a lot of data-manipulation features, then you have no choice: stick with AJAX and get a better internet connection.
If you want to batch your requests, your web application needs to be designed that way: let the user do everything they need on the client side, then have a "save" button gather all the changes and send them in one request, as sketched below.
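Here is a sketch of that idea; the /batch endpoint and the operation shape are assumptions, and the server would need a matching handler that applies the operations and replies once:

```typescript
// Queue edits locally, then send them all in a single request on "save".
type Operation = { action: "create" | "update" | "delete"; resource: string; payload?: unknown };

const pending: Operation[] = [];

export function queue(op: Operation): void {
  pending.push(op); // no network traffic yet
}

export async function save(): Promise<unknown> {
  const body = JSON.stringify({ operations: pending.splice(0) }); // drain the queue
  const response = await fetch("/batch", {   // hypothetical batch endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body, // one round trip instead of many
  });
  return response.json();
}
```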
You should always build your web application according to your clients' situation. If you're the one traveling a lot, then that might be strictly your problem and never your clients' problem; in that case, stick with AJAX and get a better internet connection.
If the client is yourself, then heck, you could do whatever you want to ease your pain, including loading everything from the get-go.
Unfortunately there's no magic solution.
Hope it helps!

Transferring data over JSON securely

I've set up a web server and can exchange data between it and my iPhone using JSON.
Is JSON already encrypted? I'm trying to make an app that people can use. I'm not sure how to securely verify a user. Right now I'm having them send some information that uniquely identifies them along with their GET requests.
But couldn't someone easily pick this up, and then replay the GET request to the server to access the same information?
What's the right way to do this?
JSON is not automagically encrypted, no.
Secure your connection with SSL/TLS. This should prevent most MITM-type attacks. If you are extremely worried about replay attacks from the client side (browser), you will probably need OAuth plus a secure nonce.
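To illustrate the nonce idea, here is a toy sketch of server-side replay protection; the in-memory set is only for illustration, and a real deployment would need a shared store and expiry:

```typescript
// Each request carries a timestamp and a random nonce; the server rejects anything
// too old or already seen, so a captured request cannot simply be replayed.
const seenNonces = new Set<string>();
const MAX_SKEW_MS = 5 * 60 * 1000; // accept requests at most 5 minutes old

export function acceptRequest(nonce: string, timestampMs: number): boolean {
  if (Math.abs(Date.now() - timestampMs) > MAX_SKEW_MS) return false; // stale or future-dated
  if (seenNonces.has(nonce)) return false;                            // replayed
  seenNonces.add(nonce);
  return true;
}
```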
No security measure will protect you 100%; you have to trade off security against performance.
If you are worried about MITM attacks, most likely someone sniffing requests on your network and then replaying them, you could set up SSL and send the JSON request via that, which would prevent the attack.
The only other thing is that via GET your security variables will be exposed in the URL.
Whether this is the ideal approach depends on what kind of information you are transferring and what other authentication you are using.
http://joekuan.wordpress.com/2010/05/08/quick-steps-on-setting-up-apache-ssl-php-json-on-freebsd-8-0/

Simple, secure API authentication system

I have a simple REST JSON API that lets other websites/apps access some of my website's database (through a PHP gateway). Basically the service works like this: call example.com/fruit/orange, and the server returns JSON information about the orange. Here is the problem: I only want websites I permit to access this service. With a simple API key system, any website could quickly obtain a key by copying it from an authorized website's (potentially client-side) code. I have looked at OAuth, but it seems a little complicated for what I am doing. Solutions?
You should use OAuth.
There are actually two OAuth specifications, the 3-legged version and the 2-legged version. The 3-legged version is the one that gets most of the attention, and it's not the one you want to use.
The good news is that the 2-legged version does exactly what you want: it allows an application to grant access to another either via a shared secret key (very similar to Amazon's Web Services model; you would use the HMAC-SHA1 signing method) or via a public/private key system (signing method: RSA-SHA1). The bad news is that it's not nearly as well supported as the 3-legged version yet, so you may have to do a bit more work than you otherwise might right now.
Basically, 2-legged OAuth just specifies a way to "sign" (compute a hash over) several fields, which include the current date, a random number called a "nonce", and the parameters of your request. This makes it very hard to impersonate requests to your web service.
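As a loose sketch of that signing idea (not a complete or compliant OAuth implementation; the parameter names and serialization here are simplified assumptions):

```typescript
import { createHmac, randomBytes } from "crypto";

// The client hashes the request parameters plus a timestamp and nonce with the shared
// secret; the server recomputes the same HMAC to check the request wasn't forged or altered.
export function signRequest(params: Record<string, string>, consumerSecret: string) {
  const oauthParams: Record<string, string> = {
    ...params,
    oauth_timestamp: Math.floor(Date.now() / 1000).toString(),
    oauth_nonce: randomBytes(8).toString("hex"),
  };
  const baseString = Object.keys(oauthParams)
    .sort() // canonical ordering so both sides hash the same string
    .map((k) => `${k}=${encodeURIComponent(oauthParams[k])}`)
    .join("&");
  const signature = createHmac("sha1", consumerSecret).update(baseString).digest("base64");
  return { ...oauthParams, oauth_signature: signature };
}
```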
OAuth is slowly but surely becoming an accepted standard for this kind of thing -- you'll be best off in the long run if you embrace it because people can then leverage the various libraries available for doing that.
It's more elaborate than you would initially want to get into, but the good news is that a lot of people have spent a lot of time on it, so you know you haven't forgotten anything. A great example is that very recently Twitter found a gap in the OAuth security model which the community is currently working on closing. If you'd invented your own system, you'd have to figure all of that out on your own.
Good luck!
Chris
OAuth is not the solution here.
OAuth is for when you have end users and want third-party apps not to handle end-user passwords. When to use OAuth:
http://blog.apigee.com/detail/when_to_use_oauth/
Go for a simple API key.
And take additional measures if there is a need for a more secure solution.
Here is some more info, http://blog.apigee.com/detail/do_you_need_api_keys_api_identity_vs._authorization/
If someone's client side code is compromised, they should get a new key. There's not much you can do if their code is exposed.
You can however, be more strict by requiring IP addresses of authorized servers to be registered in your system for the given key. This adds an extra step and may be overkill.
I'm not sure what you mean by a "simple API key", but you should be using some kind of authentication that relies on private keys (known only to the client and server), and then perform some kind of checksum algorithm on the data to ensure that the client is indeed who you think it is and that the data has not been modified in transit. Amazon AWS is a great example of how to do this.
I think it may be a little strict to guarantee that code has not been compromised on your clients' side. I think it is reasonable to place responsibility on your clients for the security of their own data. Of course this assumes that an attacker can only mess up that client's account.
Perhaps you could keep a log of which IPs the requests for a particular account come from, and if a new IP shows up, flag the account, send an email to the client, and ask them to authorize that IP. I don't know, maybe something like that could work.
Basically you have two options: either restrict access by IP or use an API key. Both options have their positive and negative sides.
Restriction by IP
This can be a handy way to restrict access to your service. You can define exactly which third-party services will be allowed to access it, without forcing them to implement any special authentication features. The problem with this method, however, is that if the third-party service is written, for example, entirely in JavaScript, then the IP of the incoming request won't be the third-party service's server IP but the user's IP, as the request is made by the user's browser and not the server. Using IP restriction therefore makes it impossible to write client-driven applications and forces all requests to go through a server with the proper access rights. Remember that IP addresses can also be spoofed.
API key
The advantage of API keys is that you do not have to maintain a list of known IPs; you do have to maintain a list of API keys, but it's easier to automate their maintenance. Basically, it works like this: you have two keys, for example a user id and a secret password. Each request to your service should provide an authentication hash consisting of the request parameters, the user id, and a hash of these values (where the secret password is used as the hash salt). This way you can both authenticate and restrict access. The problem is, once again, that if the third-party service is written as a client-driven application (for example in JavaScript or ActionScript), then anyone can parse the user id and secret salt values out of the code.
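As a sketch of how the server side of that check might look (using an HMAC rather than a plain salted hash, which is a design choice on my part; the parameter serialization is an assumption and must match the client exactly):

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// Look up the caller's secret by user id, recompute the hash over the request parameters,
// and compare it to the provided value in constant time.
export function verifyApiRequest(
  userId: string,
  params: Record<string, string>,
  providedHash: string,
  lookupSecret: (id: string) => string | undefined,
): boolean {
  const secret = lookupSecret(userId);
  if (!secret) return false; // unknown user id
  const canonical = Object.keys(params).sort().map((k) => `${k}=${params[k]}`).join("&");
  const expected = createHmac("sha256", secret).update(`${userId}&${canonical}`).digest();
  const provided = Buffer.from(providedHash, "hex");
  return provided.length === expected.length && timingSafeEqual(provided, expected);
}
```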
Basically, if you want to be sure that only the few services you've specifically approved will be allowed to access your service, then your only option is IP restriction, which forces them to route all requests via their servers. If you use an API key, you have no way to enforce this.