Building an API: JSON array also for one-element arrays?

I am building an API that uses JSON for requests/responses. I want to be able to also receive bulk requests, i.e., JSON Arrays.
Right now, I have a solution which works fine if the JSON request is always wrapped in an array:
[
{"id":"AAAEEF", "value":"abc"}
]
works, and so does
[
{"id":"AAAEEF", "value":"abc"},
{"id":"AAAEF1", "value":"vbc"},
]
If one wants to request only a single id-value combination and thus sends
{"id":"AAAEEF", "value":"abc"}
the request fails.
My question: Is it acceptable for a "good" API to enforce wrapping all JSON requests in an array, even if they only have one element?
Thanks in advance for helping me out!

The key to writing a "good" API is to be consistent, and to document it well. Whatever choices you make next are yours to make - if you decide you have good reason to require that all calls to your API be wrapped in a thisIsAContainerObject element, by all means document it and release it.
For consistency it might even be explicitly better to always require an array. As long as you throw a proper error when more than one element is inserted.
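To make the "always require an array" approach concrete, here is a minimal server-side sketch (Python is assumed, and parse_bulk_request is a made-up helper, not something from the question):

import json

def parse_bulk_request(body: str) -> list:
    """Require the request body to be a JSON array of id/value objects."""
    payload = json.loads(body)
    if isinstance(payload, dict):
        # Alternative: tolerate a bare object by wrapping it (payload = [payload]).
        # Here the array is enforced instead, for consistency.
        raise ValueError("request body must be a JSON array, even for one element")
    if not isinstance(payload, list):
        raise ValueError("request body must be a JSON array")
    for item in payload:
        if not isinstance(item, dict) or not {"id", "value"} <= item.keys():
            raise ValueError("each element needs 'id' and 'value'")
    return payload

# parse_bulk_request('[{"id":"AAAEEF", "value":"abc"}]')  -> list with one element
# parse_bulk_request('{"id":"AAAEEF", "value":"abc"}')    -> rejected with a clear error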

Related

JSON response for REST API post for collections

I have found very little detail about best practices when responding to PUT or POST requests with a REST API.
Assume the API is for a list of movies in a movie store and has the following endpoints:
GET api/Movies
GET api/Movies/{id}
PUT api/Movies/
PUT api/Movies/{id}
POST api/Movies/
POST api/Movies/{id}
Where you can PUT or POST either single items or collections. I included both because I do not want to get into a discussion about PUT vs. POST, and would like an answer on best practices, particularly regarding error responses.
If working on a single item I can return HTTP status codes and a response easily, but what should be done when handling POST and PUT of collections, especially in a non-idempotent method?
My thought for returning a package would be as follows:
{
"version": "1.0",
"status": 200,
"errors": [
// List of object IDs and errors
],
"data": [
// List of movies POSTed or PUT
]
}
With the errors being generated for each specific ID that failed, but I'm not sure it passes the smell test in regard to the overall status and HTTP status. Should I return another status if a portion of the collection fails, or if a single entity fails?
Generally in REST, an operation needs to completely succeed or completely fail. Operations like this should be atomic and idempotent.
So what you're asking is simply outside of what REST can do for you. From the horse's mouth:
"If you find yourself in need of a batch operation, then most likely you just haven’t defined enough resources."
http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-743
So what does that quote mean? It doesn't mean that you can't have a resource representing the same data as several other resources (e.g.: your collection), but if you are using PUT to update it, you are still 100% replacing its contents. Not partially.
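To illustrate the all-or-nothing point, here is a rough sketch (Python assumed; save_movie and the 'title' check are hypothetical stand-ins): validate the whole batch up front and reject it as a unit if anything is invalid, so nothing is partially applied.

def create_movies_atomically(movies, save_movie):
    """Validate every element first; persist only if the entire batch is valid."""
    errors = [{"index": i, "error": "missing 'title'"}
              for i, movie in enumerate(movies)
              if "title" not in movie]
    if errors:
        # Nothing has been written: the whole batch fails as one operation.
        return {"status": 400, "errors": errors, "data": []}
    data = [save_movie(movie) for movie in movies]
    return {"status": 201, "errors": [], "data": data}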

RESTful HTTP embedded hardware API - JSON field order significance

I'm working on a RESTful HTTP API for an embedded hardware device. In this API, hardware components are typically represented in a URI hierarchy of API resources, which then have child JSON objects with attributes/fields specific to that hardware "object". To control/modify the hardware, an HTTP PUT request is sent with a content body containing a JSON object with the desired fields to be changed.
My question is about the order of fields within a JSON request body. Let's say we are changing multiple fields in the same JSON object. Since a JSON object is, by definition, an "unordered collection of zero or more name/value pairs", this implies that the order of applying those hardware changes is also unordered/undefined, which might not be good.
For example, what if "setting1" must be configured before "state" is set? The following might be sent as the content body:
{
"setting1": 1234,
"state": "on",
}
But if we send the following instead, what happens? (Notice the "state" field occurs first.)
{
"state": "on",
"setting1": 1234,
}
The problem is, by default the context of the JSON Object is "unordered", so the behavior is uncertain.
If it could be established (perhaps through API documentation) that the order of JSON fields in a request is significant, would this be considered a violation of best-practices in the context of JSON and RESTful APIs? How have others dealt with this? I would be surprised if this has not been discussed already, but I have not been able to find anything.
I'm not certain, but I think that trying to impose an ordering restriction on clients is going to be problematic for some of them. I would bet that not all frameworks/JSON libraries will respect an ordering, at least not by default. If you control the client, this may not be a big deal, but it sounds like you don't.
Strictly speaking, you would have to send multiple PUT requests in order to ensure the updates happen in the correct order. That's the easiest to implement, but also the noisiest. Another option would be to instead support a PATCH call to the endpoint using the RFC 6902 format. That will let you control the order in which changes occur, but your clients need to build out PATCHes. You could also support a POST if neither of those appeal to you.
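For example, an RFC 6902 (JSON Patch) body, sent with Content-Type: application/json-patch+json, is an ordered array of operations, so the server applies them in the order given (the paths below mirror the fields from the question):

[
  { "op": "replace", "path": "/setting1", "value": 1234 },
  { "op": "replace", "path": "/state", "value": "on" }
]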

What do developers expect when receiving a JSON response from a server

I have a Java library that runs webservices, and these return responses in XML. The webservices all revolve around giving a list of details about items. Recently, changes were made to allow the services to return JSON by simply converting the XML to JSON. When looking at the responses, I saw they're not as easy to parse as I thought. For example, take a webservice that returns details about items.
If there are no items, the returned JSON is as follows:
{"ItemResponse":""}
If there is 1 item, the response is as follows (now ItemResponse has an object as its value instead of a string):
{"ItemResponse":{"Items":{"Name":"Item1","Cost":"$5"}}}
If there are two or more items, the response is (now Items has an array as its value instead of an object):
{"ItemResponse":{"Items":[{"Name":"Item1","Cost":"$5"},{"Name":"Item2","Cost":"$3"}]}}
To parse these you need several if/else branches, which I think is clunky.
Would it be an improvement if the responses were:
0 items: []
1 item: [{"Name":"Item1","Cost":"$5"}]
2 items: [{"Name":"Item1","Cost":"$5"},{"Name":"Item2","Cost":"$3"}]
This way there is always an array, and it contains the item data. An extra wrapper object is also possible:
0 items: {"Items":[]}
1 item: {"Items":[{"Name":"Item1","Cost":"$5"}]}
2 items: {"Items":[{"Name":"Item1","Cost":"$5"},{"Name":"Item2","Cost":"$3"}]}
I'm not experienced with JSON, so my question is: if you were a developer having to use these webservices, how would you expect the JSON response to be formatted? Is it better to always return a consistent array, even if there are no items, or is this usually not important? Or is an array not enough, and do you really expect a wrapper object around the array?
What are the conventions/standards regarding this?
Don't switch result types; always return an array if more than one item is possible. Do not mix, returning an object for one item and an array for more. That's not a good idea.
Another best practice is to version your API. Use something like yoursite.com/api/v1/endpoint. If you don't do this and you change the response of your API, all your client apps will break. So keep this in mind, together with documentation. (I've seen this happen a lot in the past.)
As a developer I personally like your second approach, but again it's a preference. There is no standard for this.
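For what it's worth, the if/else juggling the question complains about looks roughly like this on the consumer side (a Python sketch; items_from_response is a made-up helper):

def items_from_response(payload):
    """Normalize the three response shapes above into a plain list of items."""
    item_response = payload.get("ItemResponse")
    if not item_response:              # 0 items: "ItemResponse" is an empty string
        return []
    items = item_response.get("Items", [])
    if isinstance(items, dict):        # 1 item: "Items" is a single object
        return [items]
    return items                       # 2+ items: "Items" is already an array

With either of the consistent formats proposed in the question, all of this collapses to reading a single array.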
There are several reasons to use JSON:
it is much more dense and compact, so less data is sent
in JavaScript you can access those properties easily: you convert the response into an object and read its attributes (often used for AJAX)
in Java you usually don't need to parse the JSON yourself either - there are several nice libs, like www.json.org/java/index.html
if you need to know how JSON is built, use Google - there are tons of infos.
To your actual questions:
for webservices you can often choose between XML and JSON; as a "consumer", try:
https://maps.googleapis.com/maps/api/place/textsearch/json
and
https://maps.googleapis.com/maps/api/place/textsearch/xml
there is no need to format JSON visually - it is not meant to be read the way XML is
if your response doesn't have a result, a JSON service often still gives a response body - look again at the Google Maps links above; those include a response status, which makes sense for a service.
Nevertheless, the question is whether it is worth converting from XML to JSON if there isn't a specific requirement. As Dieter mentioned, it depends on who is already using this service and how it is consumed ... which means the surrounding environment is very important.

RESTful Collection Resources - idiomatic JSON representations and roundtripping

I have a collection resource called Columns. A GET with Accept: application/json can't directly return a collection, so my representation needs to nest it in a property:-
{ "propertyName": [
{ "Id": "Column1", "Description": "Description 1" },
{ "Id": "Column2", "Description": "Description 2" }
]
}
Questions:
what is the best name to use for the identifier propertyName above? should it be:
d (i.e. is d an established convention, or is it specific to particular frameworks such as MS WCF and MS ASP.NET AJAX?)
results (i.e. is results an established convention, or is it specific to particular specifications such as MS OData?)
Columns (i.e. the top-level property should have a clear name, and it helps to disambiguate my usage of generic application/json as the media type)
NB I feel pretty comfortable that there should be something wrapping it, and as pointed out by @tuespetre, XML or any other representation would force you to wrap it to some degree anyway
when PUTting the content back, should the same wrapping property be retained [given that it's not actually necessary for security reasons, and conventional JSON usage idioms might be to drop such nesting for PUT and POST since it isn't needed to guard against scripting attacks]?
my gut tells me it should be symmetric, as for every other representation, but there may be prior art for dropping the d/results wrapper [assuming that's the answer to part 1]
... Or should a PUT-back (or POST) drop the need for a wrapping property and just go with:-
[
{ "Id": "Column1", "Description": "Description 1" },
{ "Id": "Column2", "Description": "Description 2" }
]
Where would any root-level metadata go if one wished to add that?
How/would a person crafting a POST Just Know that it needs to be symmetric?
EDIT: I'm specifically interested in an answer that with a reasoned rationale that specifically takes into account the impacts on client usage with JSON. For example, HAL takes care to define a binding that makes sense for both target representations.
EDIT 2: Not accepted yet, why? The answers so far don't have citations or anything that makes them stand out over me doing a search and picking something out of the top 20 hits that seems reasonable. Am I just too picky? I guess I am (or, more likely, I just can't ask questions properly :D). It's a bit mad that a week and 3 days, even with an (admittedly measly) bonus on it, still only gets 123 views (from which 3 answers ain't bad).
Updated Answer
Addressing your questions (as opposed to going off on a bit of a tangent in my original answer :D), here are my opinions:
1) My main opinion on this is that I dislike d. As a client consuming the API I would find it confusing. What does it even stand for anyway? data?
The other options look good. Columns is nice because it mirrors back to the user what they requested.
If you are doing pagination, then another option might be something like page or slice as it makes it clear to the client, that they are not receiving the entire contents of the collection.
{
"offset": 0,
"limit": 100,
"page" : [
...
]
}
2) TBH, I don't think it makes that much difference which way you go on this; however, if it were me, I probably wouldn't bother sending back the envelope, as I don't think there is any need (see below), and why make the request structure any more complicated than it needs to be?
I think POSTing back the envelope would be odd. POST should let you add items into the collection, so why would the client need to post the envelope to do this?
PUTting the envelope back could make sense from a RESTful standpoint, as it could be seen as updating metadata associated with the collection as a whole. I think it is worth thinking about the sort of metadata you will be exposing in the envelope. All the stuff I think would fit well in this envelope (like pagination, aggregations, search facets and similar metadata) is read-only, so it doesn't make sense for the client to send it back to the server. If you find yourself with a lot of data in the envelope that the client is able to mutate, then it could be a sign to break that data out into a separate resource, with the list as a sub-collection. Rubbish example:
/animals
{
"farmName": "farm",
"paging": {},
"animals": [
...
]
}
Could be broken up into:
/farm/1
{
"id": 1,
"farmName": "farm"
}
and
/farm/1/animals
{
"paging": {},
"animals": [
...
]
}
Note: Even with this split, you could still return both combined as a single response using something like Facebook's or LinkedIn's field expansion syntax. E.g. http://example.com/api/farm/1?field=animals.offset(0).limit(10)
In response to your question about how the client should know what the JSON payload they are POSTing and PUTting should look like - this should be reflected in your API documentation. I'm not sure if there is a better tool for this, but Swagger provides a spec that allows you to document what your request bodies should look like using JSON Schema - check out this page for how to define your schemas and this page for how to reference them as a parameter of type body. Unfortunately, Swagger doesn't visualise the request bodies in its fancy web UI yet, but it is open source, so you could always add something to do this.
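As a rough sketch of what such a request-body schema could look like for the wrapped Columns representation (hand-written JSON Schema, not taken from the question or from Swagger's documentation):

{
  "type": "object",
  "required": ["Columns"],
  "properties": {
    "Columns": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["Id", "Description"],
        "properties": {
          "Id": { "type": "string" },
          "Description": { "type": "string" }
        }
      }
    }
  }
}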
Original Answer
Check out William's comment in the discussion thread on that page - he suggests a way to avoid the exploit altogether, which means you can safely use a JSON array at the root of your response, and then you need not worry about either of your questions.
The exploit you link to relies on your API using a Cookie to authenticate a user's session - just use a query string parameter instead and you remove the exploit. It's probably worth doing this anyway since using Cookies for authentication on an API isn't very RESTful - some of your clients may not be web browsers and may not want to deal with cookies.
Why Does this fix work?
The exploit is a form of CSRF attack which relies on the attacker being able to add, on his/her own page, a script tag pointing at a sensitive resource on your API.
<script src="http://mysite.com/api/columns"></script>
The victim's web browser will send all Cookies stored under mysite.com to your server, and to your server this will look like a legitimate request - you will check the session_id cookie (or whatever your server-side framework calls the cookie) and see the user is authenticated. The request will look like this:
GET http://mysite.com/api/columns
Cookie: session_id=123456789;
If you change your API to ignore Cookies and use a session_id query string parameter instead, the attacker will have no way of tricking the victim's web browser into sending the session_id to your API.
A valid request will now look like this:
GET http://mysite.com/api/columns?session_id=123456789
If using a JavaScript client to make the above request, you could get the session_id from a cookie. An attacker using JavaScript from another domain will not be able to do this, as you cannot get cookies for other domains (see here).
Now that we have fixed the issue and are ignoring session_id cookies, the script tag on the attacker's website will still send a similar request, with a GET line like this:
GET http://mysite.com/api/columns
But your server will respond with a 403 Forbidden since the GET is missing the required session_id query string parameter.
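A minimal sketch of that server-side check (Python; is_authenticated and valid_sessions are invented names, and whether session identifiers belong in a query string at all is a separate debate):

from urllib.parse import parse_qs, urlparse

def is_authenticated(request_url, valid_sessions):
    """Accept the request only when a known session_id arrives as a query parameter.

    Cookies are deliberately ignored, which is the fix described above.
    """
    params = parse_qs(urlparse(request_url).query)
    session_ids = params.get("session_id", [])
    return bool(session_ids) and session_ids[0] in valid_sessions

# is_authenticated("http://mysite.com/api/columns?session_id=123456789", {"123456789"}) -> True
# is_authenticated("http://mysite.com/api/columns", {"123456789"})                      -> False (403)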
What if I'm not authenticating users for this API?
If you are not authenticating users, then your data cannot be sensitive and anyone can call the URI. CSRF should be a non-issue, since with no authentication, even if you prevent CSRF attacks, an attacker could just call your API server-side to get your data and use it in any way he/she wants.
I would go for 'd' because it clearly separates the 'envelope' of your resource from its content. This would also make it easier for consumers to parse your responses, as opposed to 'guessing' the name of the wrapping property of a given resource before being able to access what it holds.
I think you're talking about two different things:
POST requests should be sent as application/x-www-form-urlencoded. Your response should basically mirror a GET if you choose to include a representation of the newly created resource in your reply (not mandatory in HTTP).
PUTs should definitely be symmetric to GETs. The purpose of a PUT request is to replace an existing resource representation with another. It just makes sense to have both requests share the same conventions, doesn't it?
Go with 'Columns' because it is semantically meaningful. It helps to think of how JSON and XML could mirror each other.
If you would PUT the collection back, you might as well use the same media type (syntax, format, what you will call it.)

Normal form submission vs. JSON

I see advantages in getting a JSON response and formatting it client-side, but are there any advantages to using JSON for form submission compared to normal submission?
One thing that comes to mind when dealing with POST data is useless repetition:
For example, in POST we have this:
partners[]=Apple&partners[]=Microsoft&partners[]=Activision
We can actually see that there is a lot of repetition here, whereas when we are sending JSON:
{"partners":["Apple","Microsoft","Activision"]}
59 characters versus 47. In this small sample it looks minuscule, but these savings add up, and even several bytes will save you some data. Of course, there is the parsing of data on the server side, which could even out the differences, but still, I have seen this help when dealing with slower connections (looking at you, 3G and EDGE).
I didn't see any mention of file submission so I thought I'd chime in. FormData seems to be the standard way of submitting files to a server over AJAX. I didn't come across any solutions using JSON but there might be a way to serialize/deserialize files ...
It probably depends on your server-side application. You are usually posting data to servers using POST, so how you format the underlying data depends on how you want your server to process it. POST provides some form of key->value protocol, while in JSON you can express more than that. You can also transfer JSON using GET by placing it in the URL.
You should look at JSON as a way the data is written, while a normal submission with POST just gives you a way to transport the data (of course, you can use its key->value feature to organize your data).
There exist conventions on top of HTTP that can help you define the interface to your web application. One good example is REST: http://en.wikipedia.org/wiki/Representational_state_transfer
Specifically for submitting forms, I don't see any advantages; POST was designed for this in the first place. There are cases where you want to transmit not only the form data but also some metadata; in that case JSON might help by encoding the form data (with metadata) in some JSON format, but in the end you will still be abusing POST to transfer this JSON data.
Hope I answered your question.
I don't see any apparent advantages for basic form submission. But when it comes to handling complex structures you'll start to realize the advantage of organizing your data.
So if you have a simple contact form (name, email, message), stick with normal form POSTing. But think about submitting a complete user's CV, for example; it's very annoying to handle the massive number of variables in your server-side script.
Here's an example for using JSON with PHP
//Here are the submission data
{
"personalInformation": {
"name": "hey",
"age": "20"
},
"education": {
"entry1": {
"type": "Collage",
"year": "2012"
},
"entry2": {
"type": "Highschool",
"year": "2010"
}
}
}
<?php
// Decode the posted JSON into an associative array
$CV_Data = json_decode($_POST['json_form'], true);

// Access nested values directly
echo $CV_Data['personalInformation']['name'];
echo $CV_Data['personalInformation']['age'];

// Or you can loop over the education entries
foreach ($CV_Data['education'] as $entry) {
    echo $entry['type'];
    echo $entry['year'];
}
As you can see, using JSON here makes it a lot easier for you to work on your data.
I am actually wrestling with the same problem. My use case requires that a potentially complex tree be posted to the server. Some frameworks are able to decode two-dimensional arrays as form attributes (Spring WebMVC is one I know of). Even so, this only helps you in the specific case where you are sending a nested array. The inherent nature of name-value pairs makes them unsuitable for transmitting a tree more than one level deep. In the past, I have used hacks like sending URL-encoded JSON as an attribute value:
val0=%7B%22name%22%3A%22value%22%7D&val1=something&val2=something+else
However, this approach gets messy and difficult to debug when the object becomes more complex. In addition, many frameworks provide tools that automagically map a JSON POST body to objects (e.g. Jackson for Java), so it seems obtuse not to take advantage of these tools.
So ultimately, the choice rests on the complexity of the object you are sending. If the object is limited to one level deep, use straight name-value pairs; if the object is complex and involves a deeply nested tree, use JSON.
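To illustrate that rule of thumb (a small Python sketch; the field names are invented):

import json
from urllib.parse import urlencode

# One level deep: plain name/value pairs work fine.
flat = {"name": "Ada", "email": "ada@example.com"}
print(urlencode(flat))          # name=Ada&email=ada%40example.com

# A nested tree does not map cleanly onto name/value pairs,
# so it is simpler to send the whole structure as JSON.
cv = {
    "personalInformation": {"name": "Ada", "age": 20},
    "education": [{"type": "College", "year": 2012},
                  {"type": "Highschool", "year": 2010}],
}
print(json.dumps(cv))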