REST And Can Delete/Update etc actions - json

I am writing a REST API. However, one of the requirements is to allow the caller to determine if an action may be performed (so that, for example, a button can be enabled or disabled, etc.)
The action might not be allowed for several reasons: perhaps user permissions, but possibly because, for example, you can't delete a shared object, you can't create an item with the same name as another item, or any number of other business rules.
All the logic for determining whether something can be deleted should live in the back end, but the front end must reflect this in the GUI.
I am trying to find the right pattern to use for this in REST, and am coming up a bit short. I could create a parallel API so that for every entity endpoint there was an EntityPermissions endpoint, but that seems to be overkill. I could also do something like add an HTTP header indicating that the request is only to check permissions, not perform the action, but that seems a bit dubious, and likely to mess up the HTTP cache.
Can anyone point me to the common pattern for doing something like this? Does it have a name? Or a web page that discusses it? I'm sure everyone has their own ideas on this (like my dumb ideas), but this seems to be a common enough requirement that I figure there must be a common pattern for it. But Google didn't help much.

There are going to be multiple opinionated answers to this. I'll share mine. It might not be the best for your problem, but it's a valid solution.
If you followed the real definition of REST, you would be building a hypermedia/HATEOAS-style web service. URLs would not be hardcoded; they would be discovered, and actions would be advertised by the existence of a link.
If an action may not be performed, you can just hide the link. If a user fetches the next resource, they just see all the available actions right there.
A popular format for hypermedia APIs is HAL. You might decorate the links further with more information from HTTP Link hints.
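For example, a HAL representation of an item might include a delete link only when the caller is actually allowed to delete it (the field names and link relations below are purely illustrative):

```json
{
  "id": 42,
  "name": "Example item",
  "_links": {
    "self":   { "href": "/items/42" },
    "edit":   { "href": "/items/42" },
    "delete": { "href": "/items/42" }
  }
}
```

If the item were shared and could not be deleted, the server would simply omit the delete link, and the GUI would disable or hide the corresponding button.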
If this is the first time you've heard of hypermedia APIs, there might be a bit of a learning curve, but the results of learning this can be very beneficial.

Related

REST: Creating a Json based Query: Which http method to use?

I have a question about RESTful services. In REST the POST method is used to create an entity.
And GET is used to query entities. Right?
As I read in other posts, it is not allowed in HTTP to send a GET request with a body.
But when I want to send JSON to make a query, what is the best way? Are there any best practices, or how do you solve such JSON queries?
Thanks for your answers
In REST the POST method is used to create an entity. And GET is used to query entities. Right?
Not really. GET is used to fetch representations of resources. POST is deliberately vague -- anything not worth standardizing can use POST.
when I want to send Json to make a query, what is the best way?
There is no best way to do it, just trade-offs.
The basic plot of HTTP is that you GET representations of resources. If the resource you want doesn't exist, you create a new one. So the "REST" flow would look something like sending a request to the server to create a "the answer to my query" resource, and then using GET to obtain the current representation of that resource. Which is great, because we can fetch the latest representation of that resource any time we're worried that our copy is out of date. Other people with the same query can use the same resource, so we can use a general-purpose cache to take on a lot of the work. The end result is "web scale".
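As a rough sketch of that flow (the /queries path, the payload shape, and the use of the requests library are my own illustration, not anything standardized):

```python
import requests

# Create a "query" resource describing what we want to know.
created = requests.post("https://api.example.com/queries",
                        json={"course": "math", "status": "active"})
query_url = created.headers["Location"]   # e.g. https://api.example.com/queries/7b2e

# Fetch (and later re-fetch, and let caches serve) the current representation.
results = requests.get(query_url).json()
```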
OK, not that great, because we learned that sending information over insecure channels is a bad idea; but we can put a general-purpose caching proxy in front of our server, and get some scale that way.
But "create a new resource" is a lot of ceremony when you only expect to need the query once.
Creating a new resource would have used POST in this situation anyway, so why not return a representation of the solution right away? And the answer is: go right ahead! That works great... but it doesn't give you any cache support at all. You are effectively performing a remote call under the guise of modifying a resource.
Also, POST doesn't promise idempotent semantics -- on an unreliable network, requests can get lost, and general purpose components won't know that in this particular case it is harmless to just repeat the same request.
PUT has idempotent semantics... but it also has very specific opinions about the contents of the payload that don't match "query" at all.
You can dig through other standardized methods, but there aren't really any good fits. The only methods that are close are SEARCH and REPORT, which are coupled to WebDAV semantics.
You can invent your own non-standard method, but general-purpose components won't understand it.
You can standardize a new method with the semantics you need, but that's a lot of work.
Or you can just use POST.
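A sketch of that direct approach, with the /search path and payload again made up for illustration:

```python
import requests

# Send the query itself in a POST body and get the answer back directly.
results = requests.post("https://api.example.com/search",
                        json={"course": "math", "status": "active"}).json()
```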
Remember, the web took over the world using nothing more than GET and POST. So it's probably fine.

HTML- Required "bug" [duplicate]

Using a simple tool like Firebug, anyone can change JavaScript parameters on the client side. If anyone takes the time to study your application for a while, they can learn how to change JS parameters and end up hacking your site.
For example, a simple user could delete entities which they can see but are not allowed to change. I know a good developer must check everything on the server side, but this means more overhead: you must first do checks against data from a DB in order to validate the request. This takes a lot of time; every action must be validated, and that can only be done by fetching the needed data from the DB.
What would you do to minimize hacking in that case?
A simpler way to validate would be to add another parameter to every JavaScript function; this parameter would be a signature computed over the other parameters and a secret key.
How good does the solution above sound to you?
Our team uses teamworkpm.net to organize our work. I just discovered that I can edit someone else's tasks by changing a JavaScript function (which initially edits my own tasks).
Whenever a function calls the server, you need to check on the server side, before you perform the action, whether this user is allowed to perform it.
It is necessary to build a server-side permissions mechanism to prevent unwanted actions. You may want to define groups of users rather than permissions at the individual user level; it makes things easier.
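A minimal sketch of that server-side check, assuming a Flask-style endpoint; load_user, load_task, and the can_delete permission method are hypothetical helpers, not anything from the original post:

```python
from flask import Flask, abort, session

app = Flask(__name__)

@app.route("/tasks/<int:task_id>", methods=["DELETE"])
def delete_task(task_id):
    user = load_user(session["user_id"])   # hypothetical helper: who is calling?
    task = load_task(task_id)              # hypothetical helper: what do they want to touch?
    if task is None:
        abort(404)
    if not user.can_delete(task):          # the permission decision lives on the server
        abort(403)                         # the client's JavaScript is never trusted
    task.delete()
    return "", 204
```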
Anything on the client side can be spoofed. If you use some type of secret key + parameter signature, your signature algorithm must be sufficiently random/secure that it cannot be reverse-engineered.
The overhead created with adding client side complexity is better spent crafting proper server side validations.
What would you do to minimize hacking in that case?
You can't get around using validation methods on the server side.
A simpler way to validate would be to add another parameter to every JavaScript function; this parameter would be a signature computed over the other parameters and a secret key.
How good does the solution above sound to you?
And how do you use the secret key without the client seeing it? As you yourself mentioned, the user can easily manipulate your JavaScript, and they can also read everything in it, including the secret key!
You can't hide anything in JavaScript; the only thing you can do is obscure things, and hope nobody tries to find out what you're trying to hide.
This is why you must validate everything on the server. You can never guarantee that the user won't mess about with things on the client.
Everything, even your JavaScript source code, is visible to the client and can be changed by them; there's no way around this.
There's really no way to do this completely client-side. If the person has a valid auth cookie, they can craft any sort of request they want, regardless of the code on the page, and send it to your server. You can do things with other, encrypted cookies that must be sent back with the request and also must match the inputs on the page, but you still need to check this server-side. Server-side security is essential in protecting your application from unauthorized access, and you must ensure, server-side, that every action being performed is one the user is authorized to perform.
You certainly cannot hide anything client side, so there is little point in trying to do so.
If what you are saying is that you are sending something like a user ID and you want to ensure that the returned value has not been illicitly changed, then the simplest way of doing so is probably to generate and send a UUID alongside it, and check on return that the value of the UUID matches the one stored on the server for that user ID before doing any further processing. The space of UUIDs is so large that you can discount any false hits ever occurring.
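A rough sketch of that UUID pairing; the session object and the build_form/save_profile helpers are hypothetical:

```python
import uuid

def render_profile_form(session, current_user):
    token = str(uuid.uuid4())
    session["profile_token"] = token                          # remember what we issued
    return build_form(user_id=current_user.id, token=token)   # hypothetical helper

def handle_profile_post(session, form):
    # Refuse to proceed unless the token matches the one we stored for this user.
    if form.get("token") != session.get("profile_token"):
        raise PermissionError("returned value was not one we issued")
    save_profile(user_id=form["user_id"], data=form)          # hypothetical helper
```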
As to actual server-side processing vulnerabilities: you should simply always build in your security/permissions as close to the database as you can, and definitely not in the client. There's nothing different in the scenario you outline from any normal client-server design.
Peter from Teamworkpm.net here - I'm one of the main developers and was concerned to come across this report about a security problem. I checked into this and I am happy to say that it is not possible to delete a task that you shouldn't have access to.
You get a message saying "You do not have permission to delete this task".
I think it is just the confusion between being a Project Administrator and being an overall Administrator that is the problem here: you may not be a member of a project, but as an overall administrator you still have permission to delete any task within your Teamwork site. This is by design.
We take security very seriously, and it's all implemented server side because, as Jens F says, we can't rely on client-side security.
If you do come across any issues in TeamworkPM that you would like to discuss, we'd encourage any of you to just hit the feedback link and you'll typically get an answer within a few hours.

How to create a REST API with query parameters

I am new to programming and am trying to find some documentation on creating a GET REST API in PHP that can accept query parameters in the URL. I have been looking for documentation and there seems to be a lot out there, but I'm not sure what I should focus on. I am looking for the API to take a URL like this:
rest.php?format=json&action=courses
Once this URL is requested I would like to make a query to my database to retrieve the courses. When a URL like this is made:
rest.php?format=json&action=students&course=id
I would like to query my database based on parameters set in the URL. Can someone please point me in the right direction?
REST specifications are pretty vague in this regard.
However, if you have too many parameters then it is OK to use POST instead, as per best practices.
There are many aspects that influence the way REST is implemented in an application. It totally depends upon the use cases and the way the developer wants it to be; still, there are certain REST standards which will help you keep your application adhering to quality checks.
REST STANDARDS
Hope you can use this to relate to your scenario.
In the case of query params, they are usually recommended when you can take the risk of exposing the information, and they are usually used with GET calls. Alternatively, you can use a POST call and define the information in the payload, which can accommodate more data, and there is no risk of exposing information which you don't want to.
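As a rough sketch of the GET side (shown here in Python/Flask purely to illustrate the shape, since the question is about PHP; fetch_courses and fetch_students are hypothetical database helpers):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/rest")
def rest():
    # Dispatch on the ?action=... query parameter from the URL.
    action = request.args.get("action")
    if action == "courses":
        return jsonify(fetch_courses())                              # hypothetical DB helper
    if action == "students":
        return jsonify(fetch_students(request.args.get("course")))   # hypothetical DB helper
    return jsonify({"error": "unknown action"}), 400
```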
Hope this clarifies the issue; if not, please feel free to ask.

How do I prevent my HTML from being exploitable while avoiding GUIDs?

I've recently inherited an ASP.NET MVC 4 code base. One problem I noted was the use of some database ids (ints) in the URLs as well as in HTML form submissions. The code in its present state is exploitable through both URL tinkering and creating custom HTML posts with different numbers.
Now, while I can easily fix the URL problems by using session state or additional auth checks, I'm less sure about the database ids that get embedded into the HTML that the site spits out (i.e. a drop-down I give them to fill in). When the ids come back in a POST, how can I be sure I put them there as valid options?
What is considered "best practice" in terms of addressing this problem?
While I appreciate I could just "GUID it up" I'm hesitant to do so because I find them a pain in the ass to work with when debugging databases.
Do I have a choice here? Must I use GUIDs to prevent easy guessing of ids, or is there some kind of DRY mechanism I can use to validate the usage of ids as they come back into the site?
UPDATE: A commenter asked about the exploits I'm expecting. Let's say I spit out an HTML form with a drop-down list of all the locations one can import "treasure" from. The ids of the locations that the user owns are 1, 2 and 3, and these are provided in the HTML. But the user examines the HTML, fiddles with it, and decides to put together a POST with the id of 4 selected. 4 is not his location; it's someone else's.
Validate the ID passed against the IDs the user can modify.
It may seem tedious, but this is really the only way to make sure the user has access to what they're trying to modify. Using GUIDs without validation is security by obscurity: sure, guessing them is hard, but they can potentially be guessed given enough resources.
You can do this at the top of the controller before you do anything else with the posted data. If there's a violation, just throw an exception and have your global exception handler deal with it; you don't need to handle it in a pretty way since you can safely assume that the user is tampering with data in an unsupported way.
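A sketch of that check at the top of a handler; allowed_location_ids and perform_import are hypothetical helpers standing in for your own data access and business logic:

```python
def import_treasure(current_user, posted_location_id):
    # Re-derive, on the server, the set of ids this user may legitimately pick.
    allowed = allowed_location_ids(current_user)          # hypothetical DB query
    if posted_location_id not in allowed:
        # The posted id was never one of the options we rendered for this user.
        raise PermissionError("location id not permitted for this user")
    perform_import(current_user, posted_location_id)      # hypothetical business logic
```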
The issue you describe is known as "insecure direct object references," and the OWASP group recommends two policies for dealing with this issue:
1. using session-based indirect object references, and
2. validating all accesses to object references.
An example of Suggestion #1 would be that instead of having dropdown options 1, 2, and 3, you assign each option a GUID that is associated with the original ID in a map in the user's session. When you get a POST from that user, you check to see what object the given ID was supposed to be tied to. OWASP's ESAPI has some libraries to help with this in various languages.
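A sketch of Suggestion #1, keeping a per-user map from opaque tokens to real ids in the session (all names here are illustrative):

```python
import uuid

def render_location_options(session, real_ids):
    # Hand the browser opaque tokens instead of database ids.
    mapping = {str(uuid.uuid4()): real_id for real_id in real_ids}
    session["location_map"] = mapping        # token -> real id, kept server-side
    return list(mapping)                     # the template renders only the tokens

def resolve_location(session, posted_token):
    # Anything not in this user's map was never offered to them.
    return session["location_map"][posted_token]   # KeyError means reject the request
```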
But in many cases Suggestion #1 is actually counterproductive. For example, in many cases you want to have URLs that can be copy/pasted from one user to another. Suggestion #2 is generally seen as the most foolproof way to address this issue.
You are describing Broken Access Control with Insecure Ids. Once you've identified the threat and decided which ids are owned by which users, ensure checks are in place for this on the server side.

Which verb should be used for custom actions?

Let's say we have a site where we have a list of items. On each of these items you can start a couple of different processes that will result in some kind of output related to the item in question. How should you design for the most appropriate use of the HTTP verbs? What I would like to have is multiple links per item, with each link triggering one of the actions, but in my scenario that doesn't match the HTTP verb GET, which would be used if I am using links. On the other hand, I don't want to have buttons which are all in separate forms with different actions.
It's somewhat hard to explain, but hopefully you understand; there should be some best practices to apply here.
You should NOT use GET. GET requests should be safe which means they are intended only for information retrieval and should not change the state of the server. (i.e. things like logging are OK, but things that actually update the state of the application are a no-no.) Think of a crawler going over your application. Anything you wouldn't mind a crawler going through is fine for GET, but that doesn't sound like your situation (because you said, "start a couple of different processes", but I could be misinterpreting your use case).
That leaves PUT, DELETE and POST. PUT and DELETE must be idempotent, meaning that multiple identical requests should have the same effect as a single request. So if you had a request that updated a person's name, for example, if you called it once or 100 times, the person's name would still be the same, so it is idempotent.
POST is the most flexible verb. If the processes you are kicking off are not safe or idempotent (or even if they are) you can use POST, which simply doesn't guarantee anything about safety or idempotency. The disadvantages there are:
If you use POST when GET is more semantically correct, it is less communicative of the intent of your request, since POST usually means you are sending a payload.
You just couldn't take advantage of the web's caching infrastructure that makes it so scalable.
In the past, I have used POST with query args to specify custom actions. It made sense in my use case because I had a majority of custom actions needing to pass a payload. Since you do not want to use buttons, you can use GET with query args to specify the different actions, but you have to be very careful that the action you are taking does not have any side effects and is idempotent. As noted in the comment by #jhericks below, there are many things in the network that assume that GET's are safe and may repeat GET's.
From a pure RESTful perspective though, this is not ideal. Your items will have a specific URI, and GET on the URI will return the item's representation. Running actions on the item is effectively a change in the state of the item representation, and that should be done with a POST (or a PUT, depending on who you ask and whether your web server supports PUT). In real life though, using query args is an easy workaround and it may make sense for your use case.
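For example, a custom action is often exposed either as a POST to an action-specific URI or as a POST with an action query arg (the URIs below are made up for illustration):

```python
import requests

# Two common shapes for triggering a custom action on an item:
requests.post("https://api.example.com/items/42/archive")                         # action as a sub-resource
requests.post("https://api.example.com/items/42", params={"action": "archive"})   # action as a query arg
```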
I'm not sure I fully understand your question.
But here's a quick paragraph which might help you.
REST is about making smart clients and simple servers. GET, PUT, DELETE represent the basic operations of file access at the lowest level. What you should be doing is completely ignoring anything the server can offer and offloading that work onto clients.
So, the question is: why is the server being triggered to do many things? Why can't the client do all of these things itself?
Mike Brown