REST (with JSON) vs SOAP security

This might be a silly question, but I got into a brief debate with two colleagues at work with regards to security as it relates to SOAP vs REST.
I am of the opinion that there is nothing inherently more secure when using SOAP.
Put another way, any security you can apply to a SOAP endpoint can be applied to a REST endpoint (and vice versa).
Naturally it breaks down a bit when we move to the client side, where REST very probably has far more purely client-side consumers, meaning, for example, JavaScript consumers and such. Security while sitting in the user's browser is of course a bit more of a challenge.
So, can anybody provide a counterexample?
Apologies if this should be directed to a security focused group - if that is the case, feel free to nuke the question.

Your colleagues have a point. REST only supports bindings with bearer tokens, whereas SOAP also supports so-called holder-of-key tokens. In the latter case, the client proves to the service it consumes that it is the party that requested the token, by using the generated intermediate key to encrypt the message.
This is an extra protection against token theft.
See this article for more info: http://travisspencer.com/blog/2009/02/what-is-a-proof-key.html
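To make the holder-of-key idea concrete, here is a minimal, simplified sketch in Python (illustration only, not WS-Trust; the key, message and names are made up). The client demonstrates possession of a proof key by computing a MAC over the message, something a thief who only captured the token itself could not do:

import hmac, hashlib

proof_key = b"intermediate-key-issued-alongside-the-token"  # known to client and service, never sent with the message
message = b'{"order_id": 42, "action": "ship"}'

# Client side: attach a MAC computed with the proof key to the request.
client_mac = hmac.new(proof_key, message, hashlib.sha256).hexdigest()

# Service side: recompute and compare; a stolen bearer token alone is useless here.
expected_mac = hmac.new(proof_key, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(client_mac, expected_mac)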

OK, from the SO link in the comments, the motorcycle story seems to clear this up for me nicely.
In a nutshell: WS-Security (which is layered over SOAP) is a standard "thing" whereby the message body (your actual data in a request) can be fully, or partially, encrypted (secured) so that only the correct processor code can decrypt it. This is above and beyond any transport-layer security (SSL).
AFAIK, REST, as it stands today, does not have a similar standard. So you CAN implement similar security for your REST services, but YOU will have to do it yourself. For most people (i.e. the bulk of normal users/consumers), REST over SSL is probably sufficient.
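As a hedged illustration of what "doing it yourself" could look like, here is a minimal sketch of message-level encryption for a REST payload using the cryptography package's Fernet (the payload and key handling are made up; this is not a standard comparable to WS-Security, just one possible roll-your-own approach):

import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # shared out of band between client and service
f = Fernet(key)

payload = json.dumps({"account": "12345", "amount": 100}).encode()
ciphertext = f.encrypt(payload)                # this encrypted body is what travels in the HTTP request
plaintext = json.loads(f.decrypt(ciphertext))  # only a holder of the key can read it, regardless of SSL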
Bottom line, as I have it: there is still nothing security-wise that SOAP can do that REST can't. The REST side might just require more work than the SOAP side.

Related

REST And Can Delete/Update etc actions

I am writing a REST API. However, one of the requirements is to allow the caller to determine if an action may be performed (so that, for example, a button can be enabled or disabled, etc.)
The action might not be allowed for several reasons - perhaps user permissions, but possibly because, for example, you can't delete a shared object, or you can't create an item with the same name as another item or an array of other business rules.
All the logic to determine if something can be deleted should be determined in the back end, but the front end must show this in the GUI.
I am trying to find the right pattern for this in REST, and am coming up a bit short. I could create a parallel API so that for every entity endpoint there was an EntityPermissions endpoint, but that seems like overkill. I could also do something like add an HTTP header indicating that the request is only to check permissions, not to perform the action, but that seems a bit dubious and likely to mess up HTTP caching.
Can anyone point me to the common pattern for doing something like this? Does it have a name? Or a web page that discusses it? I'm sure everyone has their own ideas on this (like my dumb ideas), but this seems to be a common enough requirement that I figure there must be a common pattern for it. But Google didn't help much.
There are going to be multiple opinionated answers about this. I'll share mine. It might not be the best fit for your problem, but it's a valid solution.
If you followed the real definition of REST, you would be building a hypermedia/HATEOAS-style web service. URLs would not be hardcoded; they would be discovered, and actions would be discovered by the existence of a link.
If an action may not be performed, you can just hide the link. When a user fetches the next resource, they see all the available actions right there.
A popular format for hypermedia APIs is HAL. You might decorate the links further with more information from HTTP link hints.
If this is the first time you have heard of hypermedia APIs, there might be a bit of a learning curve. The results of learning this can be very beneficial though.
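For illustration, here is a hedged sketch of what such a HAL-style representation might look like, built with plain Python dictionaries (the resource, link names and permission flag are made up):

import json

def task_representation(task, user_may_delete):
    representation = {
        "id": task["id"],
        "name": task["name"],
        "_links": {
            "self": {"href": f"/tasks/{task['id']}"},
        },
    }
    # Advertise the delete action only when the back end allows it;
    # the client just checks for the presence of the link to enable its button.
    if user_may_delete:
        representation["_links"]["delete"] = {"href": f"/tasks/{task['id']}"}
    return representation

print(json.dumps(task_representation({"id": 7, "name": "Write report"}, user_may_delete=False), indent=2))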

Best practice for email links that will set a DB flag?

Our business wants to email our customers a survey after they work with support. For internal reasons, we want to ask them the first question in the body of the email. We'd like to have a link for each answer. The link will go to a web service, which will store the answer, then present the rest of the survey.
So far so good.
The challenge I'm running into: making a server-side change based on an HTTP GET is bad practice, but you can't make a POST from a plain link. Options seem to be:
Use an HTTP GET instead, even though that's not correct and could cause problems (https://twitter.com/rombulow/status/990684453734203392)
Embed an HTML form in the email and style some buttons to look like links (likely not compatible with a number of email platforms)
Don't include the first question in the email (not possible for business reasons)
Use HTTP GET, but have some sort of mechanism which prevents a link from altering the server state more than once
Does anybody have any better recommendations? Googling hasn't turned up much about this specific situation.
One thing to keep in mind is that HTTP specifies semantics, not implementation. If you want to change the state of your server on receipt of a GET request, you can. See RFC 7231:
This definition of safe methods does not prevent an implementation from including behavior that is potentially harmful, that is not entirely read-only, or that causes side effects while invoking a safe method. What is important, however, is that the client did not request that additional behavior and cannot be held accountable for it. For example, most servers append request information to access log files at the completion of every response, regardless of the method, and that is considered safe even though the log storage might become full and crash the server. Likewise, a safe request initiated by selecting an advertisement on the Web will often have the side effect of charging an advertising account.
Domain-agnostic clients are going to assume that GET is safe, which means your survey results could get distorted by web spiders crawling the links, browsers pre-loading resources to reduce perceived latency, and so on.
Another possibility that works in some cases is to treat the path through the graph as the resource. Each answer link acts like a breadcrumb trail, encoding into itself the history of the client's answers. So a client that answered A and B to the first two questions is looking at /survey/questions/questionThree?AB, while the user that answered C to both is looking at /survey/questions/questionThree?CC. In other words, you aren't changing the state of the server; you are just guiding the client through a pre-generated survey graph.
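A hedged sketch of that idea, using Flask purely for illustration (the question texts and URL scheme are made up): every GET simply renders the next question of a pre-generated survey, with the answer history carried in the URL, so nothing on the server changes until a final, explicit submission.

from flask import Flask

app = Flask(__name__)

QUESTIONS = ["Was your issue resolved?", "Was the agent helpful?", "Would you recommend us?"]

@app.route("/survey/questions/<int:number>/<history>")
def question(number, history):
    # history encodes the trail so far, e.g. "AB" means A to question 1 and B to question 2.
    if number >= len(QUESTIONS):
        # The completed trail could now be submitted with an explicit POST (not shown).
        return f"Thanks! Your answers were: {history}"
    links = "".join(
        f'<a href="/survey/questions/{number + 1}/{history}{choice}">{choice}</a> '
        for choice in ("A", "B", "C")
    )
    return f"<p>{QUESTIONS[number]}</p>{links}"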

HTML- Required "bug" [duplicate]

Using a simple tool like Firebug, anyone can change JavaScript parameters on the client side. If anyone takes the time to study your application for a while, they can learn how to change JS parameters and end up hacking your site.
For example, a simple user could delete entities which they can see but are not allowed to change. I know a good developer must check everything on the server side, but this means more overhead: you must first fetch the relevant data from the DB in order to validate the request. This takes a lot of time; every action must be validated, and that can only be done by fetching the needed data from the DB.
What would you do to minimize hacking in that case?
A simpler way to validate would be to add another parameter to every JavaScript function; this parameter would be a signature computed over the previous parameters and a secret key.
How good does the solution above sound to you?
Our team uses teamworkpm.net to organize our work. I just discovered that I can edit someone else's tasks by changing a JavaScript function (which initially edits my own tasks).
On every call to the server, before performing the action, the server side needs to check whether this user is allowed to perform that action.
It is necessary to build a server-side permissions mechanism to prevent unwanted actions. You may want to define permissions per group of users rather than at the individual user level; that makes it easier.
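A hedged sketch of such a server-side check, here as a Flask decorator with a made-up, group-based permission store (real code would read the user from an authenticated session and the permissions from the DB):

from functools import wraps
from flask import Flask, abort, request

app = Flask(__name__)

GROUP_PERMISSIONS = {"admins": {"delete_task"}, "members": {"edit_own_task"}}  # made-up store
USER_GROUPS = {"alice": "admins", "bob": "members"}

def requires_permission(permission):
    def decorator(view):
        @wraps(view)
        def wrapper(*args, **kwargs):
            user = request.headers.get("X-User", "")  # stand-in for real authentication
            group = USER_GROUPS.get(user)
            if permission not in GROUP_PERMISSIONS.get(group, set()):
                abort(403)  # decided on the server, no matter what the client's JavaScript did
            return view(*args, **kwargs)
        return wrapper
    return decorator

@app.route("/tasks/<int:task_id>", methods=["DELETE"])
@requires_permission("delete_task")
def delete_task(task_id):
    return "", 204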
Anything on the client side could be spoofed. If you use some type of secret key + parameter signature, your signature algorithm must be sufficiently random/secure that it cannot be reverse engineered.
The overhead created with adding client side complexity is better spent crafting proper server side validations.
What would you do to minimize hacking in that case?
There is no way around validating on the server side.
A simpler way to validate would be to add another parameter to every JavaScript function; this parameter would be a signature computed over the previous parameters and a secret key.
How good does the solution above sound to you?
And how do you use the secret key without the client seeing it? As you yourself mentioned, the user can easily manipulate your JavaScript, and they can also read everything in it, including the secret key!
You can't hide anything in JavaScript; the only thing you can do is obscure things and hope nobody tries to find out what you are hiding.
This is why you must validate everything on the server. You can never guarantee that the user won't mess about with things on the client.
Everything, even your JavaScript source code, is visible to the client and can be changed by them; there's no way around this.
There's really no way to do this completely client-side. If the person has a valid auth cookie, they can craft any sort of request they want, regardless of the code on the page, and send it to your server. You can do things with other, encrypted cookies that must be sent back with the request and also must match the inputs on the page, but you still need to check this server-side. Server-side security is essential in protecting your application from unauthorized access, and you must ensure, server-side, that every action being performed is one that the user is authorized to perform.
You certainly cannot hide anything client side, so there is little point in trying to do so.
If what you are saying is that you are sending something like a user ID and you want to ensure that the returned value has not been illicitly changed, then the simplest way of doing so is probably to generate and send a UUID alongside it, and check on return that the value of the UUID matches the one stored on the server for that user ID before doing any further processing. The space of UUIDs is so large that you can discount any false hits ever occurring.
As to actual server-side processing vulnerabilities: you should simply always build in your security/permissions as close to the database as you can, and definitely not in the client. There's nothing different in the scenario you outline from any normal client-server design.
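A hedged sketch of that UUID pairing (the storage and form shape are made up; in practice the tokens would live in server-side session state):

import uuid

issued_tokens = {}  # stand-in for server-side storage

def render_form(user_id):
    token = str(uuid.uuid4())
    issued_tokens[user_id] = token
    return {"user_id": user_id, "token": token}  # both values go into the page

def handle_submission(form):
    # Refuse to process anything whose token doesn't match what we issued for that user ID.
    if issued_tokens.get(form["user_id"]) != form["token"]:
        raise PermissionError("user_id or token was tampered with")
    # ...continue with normal processing...

form = render_form("alice")
handle_submission(form)        # passes
form["user_id"] = "bob"
# handle_submission(form)      # would raise PermissionError: the returned value was changed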
Peter from Teamworkpm.net here - I'm one of the main developers and was concerned to come across this report about a security problem. I checked into this and I am happy that it is not possible to delete a task that you shouldn't have access to.
You get a message saying "You do not have permission to delete this task".
I think the problem here is just the confusion between being a Project Administrator and being an overall Administrator: you may not be a member of a project, but as an overall administrator you still have permission to delete any task within your Teamwork site. This is by design.
We take security very seriously and it's all implemented server-side because, as Jens F says, we can't rely on client-side security.
If you do come across any issues in TeamworkPM that you would like to discuss, we'd encourage any of you to just hit the feedback link and you'll typically get an answer within a few hours.

prevent SQL injection using html only

I am trying to validate the inputs for a comment box in order to accept only text, and to show an alert if the user enters a number (1-0) or a symbol (# $ % ^ & * + _ =), to prevent SQL injection.
Is there a way to do that in HTML?
You can never trust what comes from the client. You must always have a server side check to block something such as an SQL injection.
You can of course add the client side validation you mentioned but it's only to help users not enter junk data. Still can't trust it once it's sent to the server.
On using Javascript/HTML to improve security
Is there a way to do that in HTML?
No. As others have pointed out, you cannot increase security by doing anything in your HTML or Javascript.
The reason is that the communication between your browser and your server is totally transparent to an attacker. Any developer is probably familiar with the "developer tools" in Firefox, Chrome, etc. Even those tools, which are right there in most modern browsers, are enough to create arbitrary HTTP requests (even over HTTPS).
So your server must never rely on the validity of any part of the request. Not the URL, not the GET/POST parameters, not the cookies, etc.; you always have to verify it yourself, server-side.
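To illustrate how little the page's HTML or JavaScript matters, here is a minimal sketch using the requests library (the URL, field and cookie are made up); it sends the same kind of request a browser form would, with none of the client-side validation involved:

import requests

# Nothing in the page can stop a client from sending this directly.
response = requests.post(
    "https://example.com/comments",
    data={"comment": "anything at all, including 'invalid' characters"},
    cookies={"session": "a-valid-session-cookie"},
)
print(response.status_code)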
On SQL injection
SQL injection is best avoided by making sure never to have code like this:
sql = "select xyz from abc where aaa='" + search_argument + "'" # UNSAFE
result = db.execute_statement(sql)
That is, you never want to just join strings together to form a SQL statement.
Instead, what you want to do is use bind variables, similar to this pseudo code:
request = db.prepare_statement("select xyz from abc where aaa=?")
result = request.execute_statement_with_bind(search_argument)
This way, user input is never going to be parsed as SQL itself, rendering SQL injection impossible.
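For a concrete, runnable version of the same idea, here is a minimal sketch using Python's built-in sqlite3 module (the table and data are made up):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table abc (xyz text, aaa text)")
conn.execute("insert into abc values ('hello', 'world')")

search_argument = "world' OR '1'='1"  # a typical injection attempt

# The ? placeholder passes search_argument as a bound value, never as SQL text,
# so the injection attempt is just an ordinary (non-matching) string.
rows = conn.execute("select xyz from abc where aaa = ?", (search_argument,)).fetchall()
print(rows)  # []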
Of course, it is still wise to check the arguments on the client-side to improve user experience (avoid the latency of a server roundtrip); and maybe also on server-side (to avoid cryptic error messages). But these checks should not be confused with security.
Short answer: no, you can't do that in HTML. Even a form with a single check box and a submit button can be abused.
Longer answer...
While I strongly disagree that there's nothing you can do in HTML and JavaScript to enhance security, a full discussion of that goes way beyond the bounds of a post here.
But ultimately you cannot assume that any data coming from a computer system you do not control is in any way safe (indeed, in a lot of applications you should not assume that data from a machine you do control is safe).
Your primary defence against any attack is to convert the data to a known safe format for both the sending and receiving components before passing it between components of your system. Here, we are specifically talking about passing data from your server-side application logic to the database. Neither HTML nor JavaScript are involved in this exchange.
Moving out towards the client, you have a choice to make. You can validate and accept/reject content for further processing based on patterns in the data, or you can process all the data and put your trust in the lower layers handling the content correctly. Commonly, people take the first option, but this gives rise to a new security problem: it becomes easy to map out the defences and find any gaps. In an ideal world that would not matter too much, because the deeper defences would handle the problem; in the real world, however, developers are limited by time and ability. If it comes down to a choice of where you spend your skills/time budget, the answer should always be making the output safer rather than validating input.
The question seems to be more straightforward, but I keep my answer the same as before:
Never validate information on the client side. It makes no sense, because you need to validate the same information with the same (or even better) methods on the server! Validating on the client side only generates unnecessary overhead, as information from a client cannot be trusted. It's a waste of energy.
If you have problems with users sending many different symbols but no real messages, you should shut down your server immediately! This could mean that your users are trying to find a way to hack into the server to gain control!
Some strange-looking special character combinations could allow this if the server doesn't escape user input properly!
In short:
HTML is made for displaying content, CSS for styling that content, JavaScript for interactivity, and languages like Perl, PHP or Python for processing, delivering and validating information. These last languages normally run on a server. Even when you use them on a server you need to be very careful, as there are ways to render these protections useless too (for instance, if you use global variables the wrong way or don't escape user input properly).
I hope this helps to get the right direction.

Is using HTML5 Server-sent-events (SSE) ReSTful?

I am not able to understand whether HTML5's Server-Sent Events really fit in a ReST architecture. I understand that NOT all aspects of HTML5/HTTP need to fit in a ReST architecture. But I would like to know from experts which half of HTTP SSE is in (the ReSTful half or the other half!).
One view could be that it is ReSTful, because there is an 'initial' HTTP GET request from the client to the server and the remainder can just be seen as partial-content responses of just a different Content-Type ("text/event-stream").
A request sent without any idea of how many responses (events) are going to come back? Is that ReSTful?
Motivation for the question: We are developing the server side of an app, and we want to support both ReST clients (in general) and browsers (in particular). While SSE will work for most HTML5 browser clients, we are not sure whether SSE is suitable for support by a pure ReST client. Hence the question.
Edit1:
Was reading Roy Fielding's old article, where he says:
"In other words, a single user request results in a potentially large number of server obligations. As such, a benevolent user can produce a disproportionate load on the publisher or broker that is distributing notifications. On the Internet, we don’t have the luxury of designing just for benevolent users, and thus in HTTP systems we call such requests a denial-of-service exploit.... That is exactly why there is no standard mechanism for notifications in HTTP"
Does that imply SSE is not ReSTful?
Edit2:
Was going through Twitter's REST API.
While REST purists might debate whether their REST API is really/fully REST, just the title of the section Differences between Streaming and REST seems to suggest that Streaming (and even SSE) cannot be considered ReSTful!? Does anyone contend that?
I think it depends:
Do your server-side events use hypermedia and hyperlinks to describe possible state changes?
The answer to that question is the answer to whether or not they satisfy REST within your application architecture.
Now, the manner in which those events are sent/received may or may not adhere to REST - everything I have read about SSE suggests that they do not. I suspect it will impact several principles, especially layering - though if intermediaries were aware of the semantics of SSE you could probably negate this.
I think this is orthogonal as it's just part of the processing directive for HTML and JavaScript that the browser (via the JavaScript it is running) understands. You should still be able to have client-side application state decoupled from server-side resource state.
Some of the advice I've seen on how to deal with scaling when using SSE doesn't fit REST - i.e. introducing custom headers (modifying the protocol).
How do you respect REST while using SSE?
I'd like to see some kind of
<link rel="event" href="http://example.com/user/1" />
Then the processing directives (including code-on-demand such as JavaScript) of whatever content-type/resource you are working with tell the client how to subscribe and utilize the events made available from such a hyperlink. Obviously, the data of those events should itself be hypermedia containing more hyperlinks that control program flow. (This is where I believe you make the distinction between REST and not-REST).
At some point the browser could become aware of that link relationship - just like a stylesheet and do some of that fancy wire-up for you, so all you do is just listen for events in JavaScript.
While I do think that your application can still fit a REST style around SSE, they are not REST themselves (Since your question was specifically about their use, not their implementation I am trying to be clear about what I am speaking to).
I dislike that the specification uses HTTP because it does away with a lot of the semantics and effectively tunnels an anemic protocol through an otherwise relatively rich one. This is supposedly a benefit but strikes me as selling dinner to pay for lunch.
ReST clients (in general) and Browsers (in particular).
How is your browser not a REST client? Browsers are arguably the most RESTful clients of all. It's all the crap we stick into them via JavaScript that makes them stop adhering to REST. I suspect/fear that as long as we continue to think about our REST-API 'clients' and our browser clients as fundamentally different, we will still be stuck in this current state - presumably because all the REST people are looking for a hyperlink that the RPC people have no idea needs to exist ;)
I think SSE can be used by a REST API. According to the Fielding dissertation, we have some architectural constraints the application MUST meet, if we want to call it REST.
client-server architecture: ok - the client triggers while the server does the processing
stateless: ok - we still store client state on the client and HTTP is still a stateless protocol
cache: ok - we have to use a no-cache header
uniform interface
identification of resources: ok - we use URIs
manipulation of resources through representations: ok - we can use HTTP methods with the same URI
self-descriptive messages: ok, partially - we use the content-type header, and we can add RDF to the data if we want, but there is no standard which describes that the data is RDF-coded; we would have to define a text/event-stream+rdf MIME type or something like that, if that is supported
hypermedia as the engine of application state: ok - we can send links in the data
layered system: ok - we can add other layers which can transform the data stream, aka pipes and filters, where the pump is the server, the filters are these layers, and the sink is the client
code on demand: ok - optional, does not matter
Btw, there is no rule that you cannot use different technologies together. So you can, for example, use a REST API and WebSockets together if you want, but if the WebSockets part does not meet at least the self-descriptive message and HATEOAS constraints, then the client will be hard to maintain. Scalability can be another problem, since several of the other constraints are about that.
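As a hedged illustration of the checklist above, here is a minimal sketch of an SSE endpoint (Flask is used purely for illustration; the resource names are made up) that sets the text/event-stream content type, marks the stream as non-cacheable, and streams events whose data is itself hypermedia, i.e. carries links the client can follow:

import json
import time
from flask import Flask, Response

app = Flask(__name__)

def event_stream():
    for task_id in (1, 2, 3):
        # Each event's data is hypermedia: it contains links that drive the client's next steps.
        payload = {
            "message": f"task {task_id} finished",
            "_links": {"task": {"href": f"/tasks/{task_id}"}},
        }
        yield f"data: {json.dumps(payload)}\n\n"
        time.sleep(1)

@app.route("/notifications")
def notifications():
    return Response(event_stream(), mimetype="text/event-stream",
                    headers={"Cache-Control": "no-cache"})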