REST API: why not HTML instead of JSON?

This is probably a stupid idea, but I would like to know why.
I'm reading about REST APIs and principles such as HATEOAS, and I keep wondering why people don't just use HTML for the representation of their resources.
Sure, I can think of disadvantages such as parsing difficulty and increased payload size, but on the other hand, HTML is a semantic hypermedia language that lets you separate data from presentation. It's also human-readable, and people can interact with it in the browser: follow links, submit forms, and so on. It could serve as both an API and a UI.
Can anyone explain why it is a terrible idea to use HTML for REST API representations?

The WWW uses HTML for REST!
There's nothing wrong with the idea at all. Personally, I would congratulate you on questioning this in the first place; many don't.
REST does not mandate a representation format; it's just that JSON/XML have become the standard choices (HTML is usually hard to parse). If you use a simplified subset of HTML, you might actually find it more useful than JSON.
I've written several REST applications that accept both application/json and text/html via content negotiation. It allows for easy testing in a browser.
As you mention, it certainly makes HATEOAS easier!
JSON does not (currently) have a standard mechanism for dealing with either HATEOAS or strong typing (most people use the #class convention to specify which object the JSON represents). JSON is, in my opinion, not finished yet.
XML, on the other hand, is... but what is HTML if it isn't a kind of XML?
With HTML:
<div name="Elvis Presley" id="1" class="com.graceland.elvis.Person">
wife
<span name="country" class="java.lang.String">USA</span>
</div>
Good luck trying to replicate that with JSON. JSON doesn't effectively handle 'attributes', for starters!
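To be fair, there are conventions for mapping attributed markup into JSON (the property names below, such as "@name" and "#text", follow one common XML-to-JSON convention; they are not a standard):
{
  "div": {
    "@id": "1",
    "@name": "Elvis Presley",
    "@class": "com.graceland.elvis.Person",
    "#text": "wife",
    "span": {
      "@name": "country",
      "@class": "java.lang.String",
      "#text": "USA"
    }
  }
}
But the fact that every producer and consumer has to agree on such a convention out of band rather proves the point.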

Can anyone explain why it is a terrible idea to use HTML for REST API representations?
Yes:
It is not well-formed. How would clients parse the result consistently?
The markup is verbose.
It is not a format meant for consumption by machines; it is a view for humans. REST APIs are meant for machine consumption.
Large responses are bloated and lead to more network latency.
As for presentation, you cannot assume the API will be consumed by a browser. What about native Android / iOS apps?

REST supports all kinds of content, including HTML. That said, most RESTful applications and web APIs are focused on data, so formats like JSON, XML and YAML are more convenient to build and parse.
But if you want to leverage REST's content negotiation feature (conneg, based on the Accept header), you can serve different kinds of content according to the caller:
a browser. Here you would probably prefer to return HTML to render a UI for the request. You would have: Accept: text/html.
an application. In this case, you would rather expect structured data. You would have something like: Accept: application/json, Accept: application/xml, and so on.
In fact, it's up to the RESTful application. I have built RESTful applications that implement conneg and send back different kinds of content according to the specified Accept header.
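As a server-side sketch of that (using Python's Flask purely for illustration; the route, data and names are my own, not from any particular application):

from flask import Flask, jsonify, request

app = Flask(__name__)

COMPANIES = [{"name": "Adobe"}, {"name": "Microsoft"}, {"name": "Apple"}]

@app.route("/companies")
def companies():
    # Let the framework pick the best match from the caller's Accept header.
    best = request.accept_mimetypes.best_match(["application/json", "text/html"])
    if best == "application/json":
        return jsonify(COMPANIES)
    # Fall back to a simple human-readable HTML view for browsers.
    items = "".join("<li>{}</li>".format(c["name"]) for c in COMPANIES)
    return "<ul>{}</ul>".format(items)

A browser sending Accept: text/html gets the list rendered as HTML; a programmatic client sending Accept: application/json gets the raw data, from the same URL.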
Hope it helps,
Thierry

REST is about communication between machines. HTML contains a lot of GUI elements, and it carries CSS, JavaScript, etc. as well. All of these exist so that humans can see the view; machines are interested only in the data and its annotation.
By the way, it is possible to use HTML as a data transfer format with REST. For example, HAL has (or at least had?) an HTML serialization format, and Hydra can use HTML as well, e.g. with microdata.
If you are talking about HTML that can be used both by browsers and by REST clients (which extract only the data), then I think it is usually hard to write such an HTML document.

tl;dr: If we assume that XML isn't a terrible idea for a REST API, I think it would be reasonable to use a strict subset of XHTML (JSON is a strict subset of JavaScript), especially if HATEOAS is important to your API.
The fundamental benefit of HTML for a REST API is the <a href=""> and <form action=""> tags (you could arguably simplify it down to just the form tag). HTML is defined to handle hypermedia, and it's the only well-understood way of linking documents. You don't have to read a JSON-LD / HAL / Siren spec to understand the structure of the HTML.
Others here argue against it because HTML contains <h1> tags. But you can use a strict subset of HTML rather than trying to create a superset of JSON, just as JSON is effectively a strict subset of JavaScript objects. Personally I think this would make an excellent REST API: easy to understand by both humans and machines.
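For instance, a response for a single resource in such a subset might look like this (the URLs, rel value and field names are invented for illustration):
<div class="company">
  <h1>Adobe</h1>
  <a rel="employees" href="/api/companies/adobe/employees">Employees</a>
  <form action="/api/companies/adobe" method="post">
    <input name="name" value="Adobe">
    <button type="submit">Update</button>
  </form>
</div>
A generic client only has to understand <a> and <form> to follow links and drive state transitions, while a browser renders the very same document for humans.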
I initially thought that microdata was close to what you want, but it only handles GET; you need a way to handle all the other HTTP methods (hence the need for the <form> tag). If you only care about GET requests, I think it might work for you. You asked about JSON-LD in one of your comments, and on the Schema.org Wikipedia page you can see the similarity between microdata and JSON-LD.
microdata
<div itemscope itemtype="http://schema.org/Movie">
<h1 itemprop="name">Avatar</h1>
<div itemprop="director" itemscope itemtype="http://schema.org/Person">
Director: <span itemprop="name">James Cameron</span>
(born <time itemprop="birthDate" datetime="1954-08-16">August 16, 1954</time>)
</div>
<span itemprop="genre">Science fiction</span>
<a href="../movies/avatar-theatrical-trailer.html" itemprop="trailer">Trailer</a>
</div>
JSON-LD
<script type="application/ld+json">
{
  "@context": "http://schema.org/",
  "@type": "Movie",
  "name": "Avatar",
  "director": {
    "@type": "Person",
    "name": "James Cameron",
    "birthDate": "1954-08-16"
  },
  "genre": "Science fiction"
}
</script>
I think the major issue is that HATEOAS doesn't provide enough tangible benefit to developers; they just want to transfer data, not have a self-discoverable API. Self-discovery just isn't that important, because someone interfacing with your API only needs to discover the relevant URL once, and as long as your API doesn't change they don't have to care any more. Further, even if you did write a fully HATEOAS-supporting REST API, the main benefit is supposed to be that clients don't need to hard-code URLs, so it doesn't matter if you change them. However, you have no way of preventing API clients from hard-coding the URLs anyway, so if you ever do change the structure you're going to have unhappy clients. Take the web, for example: it's a (mostly) properly implemented REST API, but link rot is still a major issue because everyone still relies on fixed URLs.
Then, if links aren't that important, the simplicity of JSON wins out. Being able to represent both arrays and objects so naturally is hard to argue against; virtually every programming language cares fundamentally about arrays (lists) and objects (dictionaries/maps). The fact that you can't simply represent an array in XML or HTML is a major drawback.
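Compare a plain JSON array with the element repetition XML and HTML force on you (tag names invented for illustration):
["Adobe", "Microsoft", "Apple"]
versus:
<companies>
  <company>Adobe</company>
  <company>Microsoft</company>
  <company>Apple</company>
</companies>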
Another mark against it is that a large proportion of web developers are programming in JavaScript, where interop with JSON is a no-brainer; you need major benefits to persuade your boss to use something else.

Related

Is XML really more semantic than HTML with classes/ids?

I'm coming from an HTML / JavaScript / PHP background and have recently started learning XML.
I was reading this excerpt from "No Nonsense XML Web Development with PHP" which includes this comparison:
<div>
<div>
<h2>Product One</h2>
<p>Product One is an exciting new widget that will simplify your life.</p>
<p><b>Cost: $19.95</b></p>
<p><b>Shipping: $2.95</b></p>
</div>
</div>
Take a good look at this – admittedly simple – code sample from a computer’s perspective. A human can certainly read this document and make the necessary semantic leaps to understand it, but a computer couldn’t. ....
A computer program (and even some humans) that tried to decipher this document wouldn’t be able to make the kinds of semantic leaps required to make sense of it. The computer would be able only to render the document to a browser with the styles associated with each tag. HTML is chiefly a set of instructions for rendering documents inside a Web browser; it’s not a method of structuring documents to bring out their meaning.
The author then compares this to XML with this:
If the above document were created in XML, it might look a little like this:
<productListing title="ABC Products">
<product>
<name>Product One</name>
<description>Product One is an exciting new widget that will simplify your life.</description>
<cost>$19.95</cost>
<shipping>$2.95</shipping>
</product>
</productListing>
In theory, we should be able to look at any XML document and understand instantly what’s going on. In the example above, we know that a product listing contains products, and that each product has a name, a description, a price, and a shipping cost. You could say, rightly, that each XML document is self-describing, and is readable by both humans and software.
I get the author's point to a degree. Of course a computer would not be able to discern meaning from this HTML, there's no context.
However, I would never expect the HTML to be written in this way. Rather I would expect the HTML to use classes and/or ids to provide the necessary context more like:
<div class="productListing">
<div class="product">
<h2 class="name">Product One</h2>
<p class="description">Product One is an exciting new widget that will simplify your life.</p>
<p class="cost"><b>Cost: $19.95</b></p>
<p class="shipping"><b>Shipping: $2.95</b></p>
</div>
</div>
Given this example, my question is:
Is XML really more semantic than HTML that utilizes classes/ids to provide context to the data it contains?
(Note that I simplified the code examples to avoid TL;DR)
This is an interesting question. I'll give you my two cents.
I jumped onto XML a few years ago when I had to build a dynamic website and my client didn't have access to the database (just FTP access). What I essentially coded was an XML backend with PHP fetching it through SimpleXML parsing.
In retrospect, I do think XML is semantically richer than HTML. As a comment pointed out above, the HTML class has always been a styling construct; I don't personally remember using, or hearing of anyone using, classes or ids for purposes other than CSS/JS-based styles or animations.
The key advantage of XML over HTML with classes was the flexibility to throw it around. For another project, updating values of XML elements from one system, and then having them read and displayed by another system, made a lot of things smoother. Additionally, the XML parsing libraries offer a number of functions for walking through the nodes.
It's also important to note that XML allows you to define attributes, which could be viewed as something similar to HTML's classes and ids.
Also, let's not forget that RSS feeds are essentially XML, not HTML with more tags.
Therefore, answering your question specifically with respect to semantics, I definitely think XML has the advantage there.
TL;DR: XML is more semantic, according to me.
You are correct that, in terms of just looking at markup, there is little to no difference between XML's "meaningful" element names and HTML classes/ids. However, keep in mind that XML comes with a set of technologies and tools that let you work with element names easily. You can write schemas and validate against them. You can compose schemas using namespaces. You can extract structures using simple XPath expressions. All of this is much harder with the HTML approach.
So if you have requirements to capture and process "meaningful" structures, then XML is your friend. If all you want is a snapshot of something where you can say "this is a product", then there really might not be such a big difference.
My advice would be: if you store and process data through multiple publishing pipelines, XML is very likely the better starting point. If all you want is to capture snapshots that will be delivered to HTML-based consumers, then "semantically enriched" HTML may be the easier way to go.
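To make the tooling point concrete, here is a minimal sketch using Python's standard library against a trimmed copy of the product listing XML from the question:

import xml.etree.ElementTree as ET

document = """
<productListing title="ABC Products">
  <product>
    <name>Product One</name>
    <cost>$19.95</cost>
  </product>
</productListing>
"""

root = ET.fromstring(document)
# A simple XPath-style expression pulls out every product name.
for name in root.findall("./product/name"):
    print(name.text)

Doing the equivalent against the class-based HTML means first parsing tag soup and then matching on attribute values, with far less tool support.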

Alternative to HTML standard for expressing static documents content

Content tends to get mixed with its form when expressed as an HTML+CSS+JS document. Almost every modern website requires CSS and/or JavaScript to be readable, and most are not easy to parse automatically because they rely on a web browser to render them. Sections of the document are defined using visual cues, colors and formatting. One can use HTML5 tags like <article>, but as far as I know those are not part of any bigger structure, and they can still contain non-content elements.
Websites are basically apps or clients.
Is there any standard that can be used to serve the content of a website with a well-defined schema? An API for websites that could express content in a form that is easy to serve, parse, store, cryptographically sign...
I'm aware of formats like XML and JSON but I have not managed to find any standardized way to express a blog post as a JSON document.
An example of what I have in mind:
This question can be fetched as a JSON document using the Stack Exchange API. The result is machine-readable and easy to parse, but it is not standardized: it reflects details of Stack Exchange's specific data structures. Another Q&A website will have a different API, with a different structure and formats, even though both have questions and answers.
There are two important standards out there dealing with the semantic aspect of a web page, like the one you are looking for: Microdata and RDFa. With their aid, you can pick an open vocabulary to describe your data, or create your own based on them.
With JSON-LD, you can likewise create a schema for JSON documents, much as XML Schema does for XML documents.
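For the blog post case specifically, a JSON-LD document using the schema.org vocabulary might look like this (the values are illustrative):
{
  "@context": "http://schema.org",
  "@type": "BlogPosting",
  "headline": "Alternative to HTML for static documents",
  "author": {
    "@type": "Person",
    "name": "Jane Doe"
  },
  "datePublished": "2014-05-01",
  "articleBody": "Content tends to get mixed with its form..."
}
Because the vocabulary is shared, two different blog platforms emitting this shape can be read by the same client, which is exactly what the site-specific API in your example lacks.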

REST, hypertext and non-browser clients

I am confused about how a REST API can be both hypertext-driven and machine-readable. Say I design an API where some endpoint lists the contents of a collection.
GET /api/companies/
The server should return a list of company resources, e.g.:
/api/companies/adobe
/api/companies/microsoft
/api/companies/apple
One way would be to generate a hypertext (HTML) page with <a> links to the respective resources. However, I would also like to make it easy for a non-browser client to read this list. For example, some client might want to populate a dropdown GUI with companies. In this case returning an HTML document is inappropriate, and it might be better to return a list in JSON or XML format.
It is not clear to me how the REST style can satisfy both. Is there a practical solution, or examples of a REST API that is nice to both browsers and non-browser clients?
What you're looking for is nowadays referred to as a HATEOAS API. See this question for examples: Actual examples for HATEOAS (REST-architecture)
The REST architectural style, as originally defined by Roy Fielding, prescribes "hypermedia as the engine of application state" as one of its architectural constraints. However, this concept got more or less "lost in translation" when people started equating "RESTful APIs" with "using the HTTP verbs right" (plus a little more, if you're lucky). (Edit: lending credence to my assertion are the first and second highest-rated answers in What exactly is RESTful programming?; the first talks only about HTTP verbs.)
Some thoughts on your question (mainly because the subject keeps fascinating me):
In HATEOAS, standardized media types with precise meaning are very important. It's generally thought best to reuse an existing media type when possible, to benefit from the general understanding and tooling around it. One popular method is using XML, because it offers both a general structure for data and a way to define semantics, i.e. through an XML schema or with namespaces. XML in and of itself is more or less meaningless when considering HATEOAS. The same applies to JSON.
For supporting links, you want to choose a media type that either supports links "natively" (i.e. text/html, application/xhtml+xml) or one that allows you to define which pieces of the document must be interpreted as links through some embedded metadata, as XML can with, for example, XLink. I don't think you could use application/json, because JSON by itself has no pre-defined place for metadata. I do think it would be possible to design a media type based on JSON - call it application/x-descriptive-json - that defines up front that the JSON document returned must consist of a "header" and a "body" property, where the header may contain further specified metadata. You could also design a media type for JSON just to support embedded links: a simpler media type, but less extensible. I wouldn't be surprised if both things I describe already exist in some form.
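(At least one such thing does exist: HAL, mentioned elsewhere on this page, is a JSON-based media type, application/hal+json, that reserves a _links property for exactly this purpose. A minimal example, with invented URLs:)
{
  "name": "Adobe",
  "_links": {
    "self": { "href": "/api/companies/adobe" },
    "employees": { "href": "/api/companies/adobe/employees" }
  }
}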
To be nice to both browsers and non-browser clients, all it takes is respecting the Accept header. You must assume that a client who asks for text/html is truly happy with text/html. This could be an argument for not using text/html as the media type for your non-browser API entry point. In principle, though, I think it could work if the only thing you want is links. Good HTML markup can be consumed perfectly well by non-browser clients. HTML even defines a way to do paging, through rel="next" and rel="previous".
The three biggest problems I see with a single media type for both browsers and non-browsers are:
you must ensure all your site HTML is output with non-browser consumption in mind, i.e. embed sufficient metadata, perhaps adding hidden links in some places. It's a bit comparable to thinking about accessibility for visually impaired people, though now you're designing for a consumer who cannot read English, or any natural language for that matter. :)
there may be lots of markup and content that is essentially irrelevant to a non-browser client. Think of repeating header and footer text, navigation areas, that kind of thing.
HTML may simply lack the expressiveness you need. In principle, as soon as you think up conventions specific to your site (say, rel="original-image" means the link to the full-size, original image), you're not doing strict HATEOAS anymore (at least, that's my understanding). HTML leaves no room for defining new meaning for elements. XML does.
A work-around to problem three might be using XHTML, since XHTML, by virtue of being XML, does allow specifying new kinds of elements through namespaces.
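A sketch of that idea (the myapp namespace and its price element are invented):
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:myapp="http://example.com/myapp">
  <body>
    <p>Widget: <myapp:price currency="USD">19.95</myapp:price></p>
  </body>
</html>
A browser simply ignores the element it doesn't know, while an XML-aware client can select it by its namespace.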
I see @robert_b_clarke mentioning Microformats, which is relevant in this discussion. It's indeed one way of trying to improve accessibility for non-human agents. The main issue with this, from a technical point of view, is that it essentially relies on "out-of-band" information: Microformats are not part of the text/html spec. In a way, it's comparable to saying: "Hey, if I tell you that there's a resource with type A and id X, you can access it at mysite.com/A/X." The example I gave with rel="original-image" could be called a microformat as well. But it is a way to go. State in your API docs: "We serve nicely formatted text/html. Our text/html also embeds the following microformats: ..." You can even define your own.
I think the following presentation is a nice, down-to-earth explanation of HATEOAS:
http://www.slideshare.net/apigee/hateoas-101-opinionated-introduction-to-a-rest-api-style
Edit:
I only now read about HTML5 microdata (thanks to @robert_b_clarke). It seems HTML5 does provide a way to supply additional information beyond what's possible with standard HTML tags. Consider what I wrote dated. :) Edit edit: It's only a draft, phew. ;)
Edit 2
Re a "descriptive JSON" format: This has just been announced http://jsonapi.org/ . They have applied for their own mime type. It's by Yehuda Katz (Ember.js) and Steve Klabnib, who's writing Designing Hypermedia API's.
The HTTP Accept header can be used by clients to request a response in a specific content type. For example, your REST API clients might request JSON data using the following header:
GET http://yourdomain.com/api/companies
Accept: application/json
So your server app can then serve JSON or HTML for the same URL depending on the value of the Accept header. Of course all your REST client apps will have to include that header, which may or may not be practical.
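Sketched with Python's requests library (the domain is the placeholder from above):

import requests

# Ask the same URL for machine-readable data by setting the Accept header.
response = requests.get(
    "http://yourdomain.com/api/companies",
    headers={"Accept": "application/json"},
)
companies = response.json()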
There are numerous alternative approaches, one of which is to serve the same XHTML content to both browsers and client apps. You can use HTML5 microdata or Microformats to embed structured data within the HTML. That approach has a number of limitations: API client requests will result in larger, more complicated responses than necessary, as they will include a load of stuff that's only usable by a web browser. There are also other differences in behaviour you might like to enforce. For instance, you would probably want an unauthorized GET request for a protected resource to result in an HTTP 401 response for a machine client, but a redirect to a login page for a web browser.
You may find that the easiest way is to be less principled and serve the human-friendly and machine-friendly versions of your resources through separate URLs:
http://yourdomain.com/companies
http://yourdomain.com/api/companies
I've seen this question answered several ways. Some developers add a request parameter to indicate the format of the response, as in /api/companies/?rtnType=json. This method may be acceptable in a small application. It is a departure from true RESTful theology though.
The better way (in Java at least) is to use something like the Spring Framework. Spring can provide dynamic response formatting based on the media type in the HTTP request. The book "Spring in Action" (Walls, 2011) has an excellent explanation of this in chapter 11. And there are similar ways to accomplish dynamic response formatting in other languages without breaking REST.

DataPower - To parse HTML

I have a situation where the underlying application provides a UI layer, and this in turn has to be rendered as a portlet. However, I do not want all parts of the originally presented UI to be rendered in the portlet.
Proposed solution: since parsing XML with DataPower is the norm, I am wondering if it is possible to parse HTML. I understand HTML may not always be well-formed, but if there are very few HTML pages in the underlying application, then a contract can be enforced.
Also, if one manages to parse and extract data out of the HTML using DataPower, then the result (perhaps an XML document) can be used to produce HTML5 with all its goodies.
So the question: is it advisable to use DataPower to parse an HTML page and extract XML out of it? Prerequisite: the number of HTML pages per application may vary in content, but there will not be many pages.
I suspect you will be unable to parse HTML using DataPower. DataPower can parse well-formed XML, but HTML - unless it is explicitly designed as XHTML - is likely to be full of tags that break well-formedness.
Many web pages are full of tags like <br> or <ul><li>Item1<li>Item2<li>Item3</ul>, all of which will cause the parsing to fail.
If you really want to follow your suggested approach, you'll probably need to do something on a more flexible platform such as WAS where you can build (or reuse) a parser that takes care of all of that for you.
If you think about it, this is what your web browser does - it has all the complex rules that turn badly-formed XML tags (i.e. HTML) into a valid DOM structure. It sounds like you may be better off doing manipulation at the level of the DOM rather than the HTML, as that way you can leverage existing, well-tested parsing solutions and focus on the structure of the data. You could do this client-side using JavaScript or you could look at a server-side JavaScript option such as Rhino or PhantomJS.
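As an illustration of leaning on an existing parser instead of writing those rules yourself, here is a sketch in Python with BeautifulSoup and the html5lib backend (which implements the browsers' error-recovery algorithm); the input is the malformed list from above:

from bs4 import BeautifulSoup  # pip install beautifulsoup4 html5lib

tag_soup = "<ul><li>Item1<li>Item2<li>Item3</ul>"

# html5lib applies the same recovery rules a browser does, so the
# unclosed <li> tags come out as three well-formed siblings.
soup = BeautifulSoup(tag_soup, "html5lib")
for item in soup.find_all("li"):
    print(item.get_text())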
All of this might be doing things the hard way, though. Have you confirmed whether or not the underlying application has any APIs or web services that IT uses to render the pages, allowing you to get to the data without the existing presentation layer getting in the way?
Cheers,
Chris
The question of parsing an HTML page arises when you want to do some processing on it. If that is the case, you may run into problems, because DataPower by default will not allow hyperlinks inside a well-formed XML or HTML document (it is considered a security risk); however, this can be overcome with appropriate settings in the XML manager.
As far as HTML page parsing is concerned, DataPower, being an ESB layer, is expected to provide message format translation, and it indeed does. Design-wise it is a good place to do message format translation; practically, however, you will face the above-mentioned problem when you try to parse HTML as an XML document.
The parsing can (theoretically) produce any message format model you wish, so you can use XSLT to achieve what you want.
Ajitabh

What are the advantages of creating web pages with XML instead of HTML?

From time to time, I see web pages whose content is solely written in XML (not HTML or XHTML). These pages usually have some style sheets (either XSLT or CSS) attached to them which makes them look like any other ordinary web page.
My question is, what are the advantages of such an approach (if any), and why would anyone choose to work this way?
EDIT: If this is a good thing, why is it not widespread?
EDIT 2: Thanks everyone for the great responses. They really enlightened me. I also found this question whose content is also related.
It's easier to generate it programmatically and to reuse it for purposes other than displaying it as a webpage.
Update:
EDIT: If this is a good thing, why is it not widespread?
Not everyone needs to generate pages programmatically or reuse them for purposes other than displaying them as webpages. For them, it's easier to use plain HTML.
One possible advantage would be using the page's data in something other than a web browser; that would (presumably) be easier to do if the page's content were well-formed XML. Of course, in theory a well-formed, semantic XHTML page should be nearly as easy to parse.
It can also be easier to generate XML instead of XHTML, depending on the data source.
When you are getting XML data into your system and you are supposed to present that XML data, it is much easier to write some XSLT for it than to run it through a parser and then present the data.
That can be a valid reason for using XML instead of XHTML or HTML.
Update
To answer your question on why this is not widespread: because XSLT is tedious and hard to work with - specifically XPath, which some people find quite difficult to use.
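To make the trade-off concrete, here is a minimal sketch using Python's lxml (the XML and the stylesheet are invented for illustration); the transform itself is short, but the XSLT/XPath syntax is exactly where the tedium comes in:

from lxml import etree

data = etree.XML("""
<productListing>
  <product><name>Product One</name><cost>$19.95</cost></product>
  <product><name>Product Two</name><cost>$24.95</cost></product>
</productListing>
""")

stylesheet = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/productListing">
    <ul>
      <xsl:for-each select="product">
        <li><xsl:value-of select="name"/>: <xsl:value-of select="cost"/></li>
      </xsl:for-each>
    </ul>
  </xsl:template>
</xsl:stylesheet>
""")

# Compile the stylesheet once, then apply it to the document.
transform = etree.XSLT(stylesheet)
print(transform(data))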
Those pages use XSLT to get rendered on the client side. Not every browser (especially older ones) supports rendering XML + XSLT. XML can however be used server-side as template and get transformed to HTML by the application running on the server. I personally don't see any advantages to this approach.
There are a lot more web pages that are written solely in XML than you know. You're only seeing the ones that do the XSLT transformation on the client side. Server-side transformation of XML is not at all unusual, because there's a plethora of things that produce data in XML, and transforming XML to HTML in XSLT is straightforward. You'll never know this is happening if you just look at the HTML, which bears no signs of having been generated via XSLT.
Personally, I don't understand it either though one of the biggest problems is support in IE. I created a skeleton ecommerce site serving XML, transformed by XSLT and styled using CSS. I sorely missed the ability to use XLink and other wonderful XML features. It's also nice to be able to tag the data for what it is. I used a 'menu' tag for the restaurant menus. 'price' tags for prices and so on. If a user clicked on a link to change menus, all I had to do was send the name of the item, the price and the description instead of the complete page. iirc, a 4K or more HTML menu page was only 200 bytes of sent data.
As far as the "one error makes everything crash in XML" type comments, the same is true of any programming language so proper coding should be no bother for programmers and careful HTML/CSS types.
Before anyone says that what I did was actually XHTML...no. I served XML. I did call up XHTML namespaces when needed for links, images and HTML type things but only when necessary.