I'm creating an API for submitting incoming email messages for internal processing. The mail server script will submit them in a simple format, pretty much exactly as received:
POST /api/messages/
{
"sender": "sender#...",
"recipient": "recipient#...",
"email_message": "headers\nbody"
}
However, upon submission the email_message is parsed, some fields are extracted, the message body is processed, etc., and what we actually store as the entity is this:
GET /api/messages/1/
{
"sender": "sender#...",
"recipient": "{our internal recipient ID}"
"subject": "Subject from email",
"date": "Date from email",
"parsed_body": "Output of some magic performed on the Email body",
... etc ...
}
As you can see this is quite different from what was submitted via POST in the first place.
Is this transformation permitted under the REST rules or should the entity be stored (and retrieved) as originally POSTed? And if it should be stored as is then what kind of API endpoint should I provide for submitting the unparsed messages?
Is this transformation permitted under the REST rules or should the entity be stored (and retrieved) as originally POSTed?
For POST, it's fine to store whatever makes sense. Think about how it works on the web; we fill out a form, which submits as a bunch of key value pairs, but the server makes no promise that it is going to store key value pairs.
PUT semantics are different; the client expects the new representation to overwrite the existing representation (if any), whatever that means. So you need to be more aware of the implications of those semantics, especially with regard to what assumptions the client is allowed to make as a consequence of various responses.
POST semantics are much more forgiving (and consequently, the client has less information available about what's going on; that's part of the tradeoff).
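To make the contrast concrete, here is a minimal sketch reusing the resource from the question (the field values are just the ones shown above): a PUT implies the client is sending the complete replacement representation, so it would have to supply the already-parsed form, whereas the POST is free to accept the raw message and let the server transform it.
PUT /api/messages/1/
{
"sender": "sender#...",
"recipient": "{our internal recipient ID}",
"subject": "Corrected subject",
"date": "Date from email",
"parsed_body": "Output of some magic performed on the Email body"
}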
I am writing a JSON-RPC-conformant JS library. Are clients allowed by the specification to add custom fields (e.g. headers) alongside the params field in Request objects?
E.g.
{"jsonrpc": "2.0", "method": "subtract", "params": [42, 23], "headers": { "original-client": 12 }, "id": 1}
The spec (https://www.jsonrpc.org/specification#request_object) does not mention anything about adding additional fields.
Does my library violate the spec if it adds a header field?
Same question here. (I want to add user auth info / message signature...)
You are right that the specification does not mention anything about additional fields. So, legally speaking, if it is not forbidden, it is allowed.
On the other hand, I found at least one reference implementation here: www.simple-is-better.org/rpc/jsonrpc.py, which will hit you with an exception if a decoded message contains extra fields. This code is from one of the authors of the JSON-RPC 2.0 specs, Roland Köbler.
So, YMMV.
If your library is only an internal building block for a proprietary system, go ahead.
If you need to interface with other JSON-RPC implementations, test each one.
If the library is intended for public use, don't do it.
Edit: I solved this using the following protocol:
Before the "payload" message, send a standard-compliant notification with method "rpc.secinfo". (The rpc. prefix is per-spec reserved for extensions)
The "params" field contains whatever metadata applies (e.g. username, signature)
Example: {"jsonrpc": "2.0", "method": "rpc.secinfo", "params": {"user":"root", "signature":"0xDEADBEEF"}}<DELIM><payload><DELIM>
If the header data is empty, no special notification is prepended (i.e. it behaves exactly like standard JSON-RPC).
The receiver can decode the header message just like any other call, and just needs to apply the header (check the signature, whatever) to the following message.
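For example, a full framed exchange might look like this (reusing the subtract call from the question as the payload; the DELIM markers are whatever framing you choose):
{"jsonrpc": "2.0", "method": "rpc.secinfo", "params": {"user": "root", "signature": "0xDEADBEEF"}}
<DELIM>
{"jsonrpc": "2.0", "method": "subtract", "params": [42, 23], "id": 1}
<DELIM>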
Discussion:
allows framing without touching payload
allows decoding the header without decoding payload
allows using byte-payload as is, particularly allows encrypted+literal payload to coexist (however encrypted payload breaks JSON-RPC compat!)
"Unaware" receiver: will throw the rpc.secinfo calls away silently or loudly, but is able to operate. Missing ID indicates a notification, i.e. receiver will not send response back per JSON-RPC spec.
"Unaware" sender: Received messages are valid, no-header messages.
Feel free to reuse this protocol, but please credit this SO answer.
I'm working on a RESTful HTTP API for an embedded hardware device. In this API, hardware components are typically represented in a URI hierarchy of API Resources, which then have child JSON objects with attributes/fields specific to that hardware "object". To control/modify the hardware, an HTTP PUT request is sent with a content body of a JSON object containing the desired fields to be changed.
My question is about the order of fields within a JSON request body. Let's say we are changing multiple fields in the same JSON object. Since a JSON object is, by definition, an "unordered collection of zero or more name/value pairs", this implies that the order in which those hardware changes are applied is also undefined, which might not be good.
For example, what if "setting1" must be configured before "state" is set? The following might be sent as the content body:
{
"setting1": 1234,
"state": "on",
}
But if we send the following instead, what happens? (Notice the "state" field occurs first.)
{
"state": "on",
"setting1": 1234,
}
The problem is that, by definition, the JSON object is unordered, so the behavior is uncertain.
If it could be established (perhaps through API documentation) that the order of JSON fields in a request is significant, would this be considered a violation of best-practices in the context of JSON and RESTful APIs? How have others dealt with this? I would be surprised if this has not been discussed already, but I have not been able to find anything.
I'm not certain, but I think that trying to impose an ordering restriction on clients is going to be problematic for some of them. I would bet that not all frameworks/JSON libraries will respect an ordering, at least not by default. If you control the client, this may not be a big deal, but it sounds like you don't.
Strictly speaking, you would have to send multiple PUT requests in order to ensure the updates happen in the correct order. That's the easiest to implement, but also the noisiest. Another option would be to instead support a PATCH call to the endpoint using the RFC 6902 format. That will let you control the order in which changes occur, but your clients need to build out PATCHes. You could also support a POST if neither of those appeals to you.
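To sketch the PATCH option (the URI is made up; the fields come from your example): an RFC 6902 body is an ordered array of operations, and the server applies them in the sequence given, so ordering is explicit rather than implied by object layout.
PATCH /hardware/component1
Content-Type: application/json-patch+json
[
  { "op": "replace", "path": "/setting1", "value": 1234 },
  { "op": "replace", "path": "/state", "value": "on" }
]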
I was reading this question regarding REST:
What exactly is RESTful programming?
While reading it, I understood that the client is independent of the server and the client doesn't need to construct anything.
I want to know, when we are building forms like user registration, what the REST way of doing it is.
I mean, when I do a GET for /user/new, then:
Does the server have to send the complete form in HTML?
Or does it only send the fields in JSON, and the form is constructed by the client itself?
But then again there will be many complexities; if I just send the fields, then what about things like:
Hidden fields
Default values for select boxes
Logic like "this field can't be greater than 30", etc.
REST is, as you're already aware, a way of communicating between a client and a server. However, the issue here is what is being defined as the "client". Personally, I tend to consider that the browser is not in itself the client: instead, the client is written in JavaScript, and the browser is merely a conduit to executing it.
Say for the sake of argument that you wish to view the details of user '1414'. The browser would be directed to the following location:
/UserDetails.html#1414
This would load the static file UserDetails.html, containing all the form fields that may be necessary, as well as (via a <script> tag) your JavaScript client. The client would load, look at the URL and make a RESTful call to:
GET /services/Users/1414
which would send back JSON data relating to that user. When the user then hits "save", the client would then make the following call:
PUT /services/Users/1414
to store the data.
In your example, you wanted to know how this would work with a new user. The URL that the browser would be directed to would be:
/UserDetails.html#0
(or #new, or just # - just something to tell the JavaScript that this is a new user. This isn't a RESTful URL so the precise details are irrelevant).
The browser would again load the static file UserDetails.html and your JavaScript client, but this time no GET would be made on the Users service - there is no user to download. In addition, when the user is saved, this time the call would be:
POST /services/Users/
with the service ideally returning a 302 to /services/Users/1541 - the location of the object created. Note that as this is handled in the client not the browser, no actual redirection occurs.
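For illustration, the GET and PUT bodies in that flow might look something like this (the field names are invented for the example):
GET /services/Users/1414
{ "id": 1414, "name": "Alice", "email": "alice@example.com" }
PUT /services/Users/1414
{ "id": 1414, "name": "Alice", "email": "alice@new-domain.example" }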
"Forms" for hypermedia APIs could be rendered in a "forms aware" media type like for instance Mason (https://github.com/JornWildt/Mason), Hydra (http://www.markus-lanthaler.com/hydra/) or Sirene (https://github.com/kevinswiber/siren). In Mason (which is my project) you could have a "create user" action like this:
{
"#actions": {
"create-user": {
"type": "json",
"href": "... URL to resource accepting the POST ...",
"method": "POST",
"title": "Create new user",
"schemaUrl": "... Optional URL to JSON schema definition for input ..."
"template": {
"Windows Domain": "acme"
}
}
}
}
The client can GET a resource that includes the above action, find it by the name "create-user" and in this way be told which method to use, where to apply it, how the payload should be formatted (in this case it's JSON as described by an external schema definition) and some default values (the "template" object).
If you need more complex descriptions (like selection lists and validation rules, as you mention) then you are on your own and will have to encode that information in your own data - or use HTML or XForms.
There are multiple ways to do what you want.
You can use GET for /user/new along with a create-form link relation to get a single link (see the sketch after this list). This can be plain HTML or an HTML fragment, or a schema description, practically anything you want (the result will be less reusable than the other solutions).
You can use a standard MIME type which supports form descriptions. For example HAL with a form extension or collection+json.
You can use an RDF format, like JSON-LD with a proper vocab like Hydra.
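As a rough sketch of the first option (the URLs, the HAL-style _links and the shape of the form description are all hypothetical), the collection could advertise the form, and a GET on it could return a machine-readable description of the fields:
GET /users
{
  "_links": {
    "create-form": { "href": "/user/new" }
  }
}
GET /user/new
{
  "fields": [
    { "name": "username", "type": "string", "required": true },
    { "name": "email", "type": "string", "required": true }
  ],
  "submit": { "method": "POST", "href": "/users" }
}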
I have a collection resource called Columns. A GET with Accept: application/json can't directly return a collection, so my representation needs to nest it in a property:-
{ "propertyName": [
{ "Id": "Column1", "Description": "Description 1" },
{ "Id": "Column2", "Description": "Description 2" }
]
}
Questions:
what is the best name to use for the identifier propertyName above? should it be:
d (i.e. is d an established convention, or is it specific to some particular frameworks (MS WCF and MS ASP.NET AJAX)?)
results (i.e. is results an established convention or is it specific to some particular specifications (MS OData)?)
Columns (i.e. the top level property should have a clear name and it helps to disambiguate my usage of generic application/json as the Media Type)
NB I feel pretty comfortable that there should be something wrapping it, and as pointed out by @tuespetre, XML or any other representation would force you to wrap it to some degree anyway
when PUTting the content back, should the same wrapping in said property be retained [given that it's not actually necessary for security reasons and perhaps conventional JSON usage idioms might be to drop such nesting for PUT and POST given that they're not necessary to guard against scripting attacks] ?
my gut tells me it should be symmetric, as for every other representation, but there may be prior art for dropping the d / results wrapper (assuming that's the answer to part 1)
... Or should a PUT-back (or POST) drop the need for a wrapping property and just go with:-
[
{ "Id": "Column1", "Description": "Description 1" },
{ "Id": "Column2", "Description": "Description 2" }
]
Where would any root-level metadata go if one wished to add that?
How/would a person crafting a POST Just Know that it needs to be symmetric?
EDIT: I'm specifically interested in an answer with a reasoned rationale that specifically takes into account the impacts on client usage with JSON. For example, HAL takes care to define a binding that makes sense for both target representations.
EDIT 2: Not accepted yet, why? The answers so far don't have citations or anything that makes them stand out over me doing a search and picking something out of the top 20 hits that seems reasonable. Am I just too picky? I guess I am (or more likely I just can't ask questions properly :D). It's a bit mad that a week and 3 days, even with an (admittedly measly) bonus, still only gets 123 views (from which 3 answers ain't bad).
Updated Answer
Addressing your questions (as opposed to going off on a bit of a tangent in my original answer :D), here are my opinions:
1) My main opinion on this is that I dislike d. As a client consuming the API I would find it confusing. What does it even stand for anyway? data?
The other options look good. Columns is nice because it mirrors back to the user what they requested.
If you are doing pagination, then another option might be something like page or slice as it makes it clear to the client, that they are not receiving the entire contents of the collection.
{
"offset": 0,
"limit": 100,
"page" : [
...
]
}
2) TBH, I don't think it makes that much difference which way you go for this, however if it was me, I probably wouldn't bother sending back the envelope, as I don't think there is any need (see below) and why make the request structure any more complicated than it needs to be?
I think POSTing back the envelope would be odd. POST should let you add items into the collection, so why would the client need to post the envelope to do this?
PUTing the envelope back could make sense from a RESTful standpoint as it could be seen as updating metadata associated with the collection as a whole. I think it is worth thinking about the sort of meta data you will be exposing in the envelope. All the stuff I think would fit well in this envelope (like pagination, aggregations, search facets and similar meta data) is all read only, so it doesn't make sense for the client to send this back to the server. If you find yourself with a lot of data in the envelope that the client is able to mutate - then it could be a sign to break that data out into a separate resource with the list as a sub collection. Rubbish example:
/animals
{
"farmName": "farm",
"paging": {},
"animals": [
...
]
}
Could be broken up into:
/farm/1
{
"id": 1,
"farmName": "farm"
}
and
/farm/1/animals
{
"paging": {},
"animals": [
...
]
}
Note: Even with this split, you could still return both combined as a single response using something like Facebook's or LinkedIn's field expansion syntax. E.g. http://example.com/api/farm/1?field=animals.offset(0).limit(10)
In response to your question about how the client should know what the JSON payload they are POSTing and PUTing should look like - this should be reflected in your API documentation. I'm not sure if there is a better tool for this, but Swagger provides a spec that allows you to document what your request bodies should look like using JSON Schema - check out this page for how to define your schemas and this page for how to reference them as a parameter of type body. Unfortunately, Swagger doesn't visualise the request bodies in its fancy web UI yet, but it is open source, so you could always add something to do this.
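For instance, a minimal JSON Schema sketch for the wrapped Columns request body might look like this (the wrapper name and the required fields are assumptions):
{
  "type": "object",
  "properties": {
    "columns": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "Id": { "type": "string" },
          "Description": { "type": "string" }
        },
        "required": ["Id"]
      }
    }
  },
  "required": ["columns"]
}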
Original Answer
Check out William's comment in the discussion thread on that page - he suggests a way to avoid the exploit altogether, which means you can safely use a JSON array at the root of your response, and then you need not worry about either of your questions.
The exploit you link to relies on your API using a Cookie to authenticate a user's session - just use a query string parameter instead and you remove the exploit. It's probably worth doing this anyway since using Cookies for authentication on an API isn't very RESTful - some of your clients may not be web browsers and may not want to deal with cookies.
Why does this fix work?
The exploit is a form of CSRF attack which relies on the attacker being able to add, on his/her own page, a script tag pointing to a sensitive resource on your API.
<script src="http://mysite.com/api/columns"></script>
The victim's web browser will send all Cookies stored under mysite.com to your server, and to your server this will look like a legitimate request - you will check the session_id cookie (or whatever your server-side framework calls the cookie) and see the user is authenticated. The request will look like this:
GET http://mysite.com/api/columns
Cookie: session_id=123456789;
If you change your API to ignore Cookies and use a session_id query string parameter instead, the attacker will have no way of tricking the victim's web browser into sending the session_id to your API.
A valid request will now look like this:
GET http://mysite.com/api/columns?session_id=123456789
If using a JavaScript client to make the above request, you could get the session_id from a cookie. An attacker using JavaScript from another domain will not be able to do this, as you cannot get cookies for other domains (see here).
Now that we have fixed the issue and are ignoring session_id cookies, the script tag on the attacker's website will still send a similar request with a GET line like this:
GET http://mysite.com/api/columns
But your server will respond with a 403 Forbidden since the GET is missing the required session_id query string parameter.
What if I'm not authenticating users for this API?
If you are not authenticating users, then your data cannot be sensitive and anyone can call the URI. CSRF should be a non-issue since, with no authentication, even if you prevent CSRF attacks, an attacker could just call your API server-side to get your data and use it in any way he/she wants.
I would go for 'd' because it clearly separates the 'envelope' of your resource from its content. This would also make it easier for consumers to parse your responses, as opposed to 'guessing' the name of the wrapping property of a given resource before being able to access what it holds.
I think you're talking about two different things:
A POST request should be sent as application/x-www-form-urlencoded. Your response should basically mirror a GET if you choose to include a representation of the newly created resource in your reply (not mandatory in HTTP).
PUTs should definitely be symmetric to GETs. The purpose of a PUT request is to replace an existing resource representation with another. It just makes sense to have both requests share the same conventions, doesn't it?
Go with 'Columns' because it is semantically meaningful. It helps to think of how JSON and XML could mirror each other.
If you would PUT the collection back, you might as well use the same media type (syntax, format, what you will call it.)
I see the advantages of getting a JSON response and formatting it on the client side, but are there any advantages to using JSON for form submission compared to normal submission?
One thing that comes to mind when dealing with POST data is useless repetition:
For example, in POST we have this:
partners[]=Apple&partners[]=Microsoft&partners[]=Activision
We can actually see, that there is a lot of repetition here, where as when we are sending JSON:
{"partners":["Apple","Microsoft","Activision"]}
59 characters versus 47. In this small sample it looks minuscule, but these savings could go up and up, and even several bytes will save you some data. Of course, there is the parsing of data on the server side, which could even out the differences, but still, I have seen this help when dealing with slower connections (looking at you, 3G and EDGE).
I didn't see any mention of file submission so I thought I'd chime in. FormData seems to be the standard way of submitting files to a server over AJAX. I didn't come across any solutions using JSON but there might be a way to serialize/deserialize files ...
It probably depends on your server-side application. You are usually posting data to servers using POST, so how you format the underlying data depends on how you want your server to process it. POST provides some form of key->value protocol, while in JSON you can put more than that. You can also transfer JSON using GET by placing it in the URL.
You must look at JSON as a way of writing data down, while normal submission with POST just gives you a way of transporting data (of course you can abuse its key->value feature to structure your data).
There are conventions on top of HTTP that could help you define an interface to your web application. One good example is REST: http://en.wikipedia.org/wiki/Representational_state_transfer
Specifically for submitting forms, I don't see any advantages; POST was designed for this in the first place. There are cases where you want to transmit not only the data from the form but also some metadata; in that case JSON might help you by encoding the form data (with metadata) in some JSON format, but in the end you will still be abusing POST to transfer this JSON data.
Hope I answered your question.
I don't see any apparent advantages for basic form submission. But when it comes to handling complex structures you'll start to realize the advantage of organizing your data.
So if you have a simple contact form (name, email, message), stick with normal form POSTing. But think about submitting a complete user's CV, for example; it's very annoying to handle the massive number of variables in your server-side script.
Here's an example of using JSON with PHP.
// Here is the submitted data
{
"personalInformation": {
"name": "hey",
"age": "20"
},
"education": {
"entry1": {
"type": "Collage",
"year": "2012"
},
"entry2": {
"type": "Highschool",
"year": "2010"
}
}
}
$CV_Data = json_decode($_POST['json_form'], true);

// Access nested values directly
echo $CV_Data['personalInformation']['name'];
echo $CV_Data['personalInformation']['age'];

// Or you can loop over the education entries
foreach ($CV_Data['education'] as $entry) {
    echo $entry['type'];
    echo $entry['year'];
}
As you can see, using JSON here makes it a lot easier for you to work on your data.
I am actually wrestling with the same problem. My use case requires a potentially complex tree to be posted to the server. Some frameworks are able to decode two-dimensional arrays as form attributes (Spring WebMVC is one I know of). Even so, this only helps you in the specific case when you are sending a nested array. The inherent nature of name-value pairs makes them unsuitable for transmitting a tree more than one level deep. In the past, I have used hacks like sending URL-encoded JSON as an attribute value:
val0=%7B%22name%22%3A%22value%22%7D&val1=something&val2=something+else
However, this approach gets messy and difficult to debug when the object becomes more complex. In addition, many frameworks provide tools that automagically map a JSON POST body to objects (e.g. Jackson for Java), so it seems obtuse not to take advantage of these tools.
So ultimately, the choice rests on the complexity of the object you are sending. If the object is limited to one level deep, use straight name-value-pair; if the object is complex and involves a deeply-nested tree, use JSON.