What is the difference between a simple Async servlet and the Comet / Bayeux protocol?
I am trying to implement a "Server Push" (or "Reverse Ajax") kind of webpage that will receive updates from the server as and when events occur on the server. So even without the client explicitly sending a request, I need the server to be able to send responses to the specific client browser.
I understand that Comet is the umbrella term for this kind of technology, with Bayeux being a protocol. But when I looked through the servlet spec, even a plain async servlet seems to accomplish the same thing. I mean, I can define a simple servlet with the <async-supported> attribute set to true in web.xml, and that servlet will be able to send responses to the client asynchronously. I can then have a jQuery- or ExtJS-based Ajax client that just keeps making a long_polling() call into the servlet, something like what is described in the link below:
http://www.ibm.com/developerworks/web/library/wa-reverseajax1/index.html#long
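To make this concrete, here is a minimal sketch of the kind of async long-polling servlet I have in mind; EventQueue is a hypothetical stand-in for whatever server-side event source exists:

    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Annotation equivalent of <async-supported>true</async-supported> in web.xml
    @WebServlet(urlPatterns = "/events", asyncSupported = true)
    public class LongPollServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            AsyncContext ctx = req.startAsync();
            ctx.setTimeout(30_000); // let the client re-poll if nothing arrives in 30s
            EventQueue.register(ctx); // hypothetical: parks the context until an event occurs
        }
    }

    // Later, when a server-side event occurs, some worker would do roughly:
    //   AsyncContext ctx = EventQueue.next();
    //   ctx.getResponse().getWriter().write(eventJson);
    //   ctx.complete();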
So my question is this:
What is the difference between a simple Async servlet and the Comet / Bayeux protocol?
Thanks
It is true that "Comet" is the umbrella term for these technologies, but the Bayeux protocol is used by only a few implementations. A Comet technique can use any protocol it wants; Bayeux is one of them.
Having said that, there are two main differences between an async servlet solution and a Comet+Bayeux solution.
The first difference is that the Comet+Bayeux solution is independent of the protocol that transports Bayeux.
In the CometD project, there are pluggable transports for both clients and servers that can carry Bayeux.
You can carry it using HTTP, with Bayeux being the content of a POST request, but you can also carry it using WebSocket, with Bayeux being the payload of the WebSocket message.
If you use async servlets, you cannot leverage WebSocket, which is way more efficient than HTTP.
The second difference is that async servlets only carry HTTP, and you need more than that to handle remote Comet clients.
For example, you may want to identify clients uniquely, so that 2 tabs on the same page result in 2 different clients. To do this, you need to add a "property" to the async servlet request; let's call it sessionId.
Next, you want to be able to authenticate a client; only authenticated clients can get a sessionId. But to tell the first request (which authenticates) apart from subsequent, already-authenticated requests, you need another property, say messageType.
Next, you want to be notified quickly of disconnections due to network loss or other connectivity problems, so you need to come up with a heartbeat solution: if the heart beats, you know the connection is alive; if it does not, you know it's dead and you perform rescue actions.
Next, you need disconnect features. And so on.
Quickly you realize that you're building another protocol on top of HTTP.
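To make that concrete, here is a sketch of the home-grown envelope you end up inventing; every field name below is hypothetical, and every one of them is bookkeeping that Bayeux already standardizes (clientId, channel, advice, and so on):

    // Hypothetical hand-rolled protocol messages (Java 15+ text blocks):
    public class HomeGrownProtocol {
        static final String AUTH = """
            {"messageType": "auth", "user": "alice", "credentials": "secret"}
            """;
        static final String HEARTBEAT = """
            {"messageType": "heartbeat", "sessionId": "f3a9"}
            """;
        static final String DATA = """
            {"messageType": "data", "sessionId": "f3a9", "payload": {"stock": "XYZ"}}
            """;
    }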
At that point, it's better to reuse an existing protocol like Bayeux, and a proven solution like CometD (which is based on Comet techniques using the Bayeux protocol), which gives you:
Java and JavaScript client libraries with simple yet powerful APIs
Java server library to implement your application logic via annotated services, without having to handle low-level details such as HTTP or WebSocket (a sketch follows the list below)
Transport pluggability, both client and server
Bayeux protocol extensibility
Lazy messages
Clustering
Top performance
Future-proof: users of CometD before the advent of WebSocket did not have to change a line of code to take advantage of WebSocket: all the magic was implemented in the libraries
Based on standards
Designed and maintained by web protocol experts
Extensive documentation
I can continue, but you get the point :)
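As a taste of the annotated services mentioned in the list above, here is a minimal echo service sketched against the CometD 5.x server API (check the CometD documentation for the exact signatures in your version):

    import org.cometd.annotation.Listener;
    import org.cometd.annotation.Service;
    import org.cometd.annotation.Session;
    import org.cometd.bayeux.Promise;
    import org.cometd.bayeux.server.ServerMessage;
    import org.cometd.bayeux.server.ServerSession;

    @Service("echo")
    public class EchoService {
        @Session
        private ServerSession session; // this service's own server-side session

        // Invoked whenever any client publishes to the /echo channel
        @Listener("/echo")
        public void echo(ServerSession remote, ServerMessage.Mutable message) {
            // Deliver the same data back to the publishing client only
            remote.deliver(session, "/echo", message.getData(), Promise.noop());
        }
    }

Note that there is no HTTP or WebSocket handling anywhere in the service: the transport is whatever the client and server negotiated.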
You don't want to use a low-level solution that ties you to HTTP only. You want a higher-level solution that abstracts your application from the Comet technique used and from the protocol that transports Bayeux, so that your application can be written once and leverage future technology improvements. As an example of such an improvement: CometD was working well way before async servlets came into the picture, and with async servlets it simply became more scalable; so did your application, without a single line of it changing.
By using a higher-level solution you can concentrate on your application rather than on the gory details of how to write an async servlet correctly (and that is not as easy as one may think).
The answer to your question could be: you use Comet+Bayeux because you want to stand on the shoulders of giants.
Related
I'm looking for a way to implement basic Publish / Subscribe between applications written in different languages, to exchange events with JSON payloads.
WebSocket seems like the obvious choice for the transport, but you need an (arguably small) layer on top to implement some of the plumbing:
agreeing on messages representing the pub/sub domain ("subscribe to a topic", "publish a message")
agreeing on messages for the infrastructure ("heartbeat", "authentication"); examples below
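Concretely, I mean message shapes like these, invented purely for illustration (wrapped in Java text blocks only to have something compilable):

    public class PubSubPlumbing {
        static final String SUBSCRIBE = """
            {"type": "subscribe", "topic": "orders"}
            """;
        static final String PUBLISH = """
            {"type": "publish", "topic": "orders", "payload": {"id": 42}}
            """;
        static final String HEARTBEAT = """
            {"type": "ping"}
            """;
        static final String AUTH = """
            {"type": "auth", "token": "opaque-token-here"}
            """;
    }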
I was expecting to find an obvious standard for this, but there does not seem to be any.
WAMP is often referred to, but in my (short) experience, the server/client library implementations are not great
STOMP is often referred to, but in my (even shorter) experience, it's even worse
Phoenix Channels are nice, but they're restricted to the Phoenix/Elixir world and are not a standard (so the messages can change in any Phoenix version without notice).
So, is everyone using MQTT/WS (which requires a separate broker component rather than a simple server)? Or gRPC?
Is everyone just re-implementing it from scratch? (It's one of those things that seem easy enough to do oneself, but I guess you'd just end up with a half-baked, poorly-specified, broken version of the thing I'm looking for...)
Or is there something fundamentally broken with the idea of serving streams of data from a server over WS ?
There are two primary classes of WebSocket libraries: those that implement the protocol and leave the rest to the developer, and those that build on top of the protocol with various additional features commonly required by realtime messaging applications, such as restoring lost connections, pub/sub and channels, authentication, authorization, etc.
The latter variety often requires that their own libraries be used on the client-side, rather than just using the raw WebSocket API provided by the browser. As such, it becomes crucial to make sure you’re happy with how they work and what they’re offering. You may find yourself locked into your chosen solution’s way of doing things once it has been integrated into your architecture, and any issues with reliability, performance, and extensibility may come back to bite you.
ws, faye-websockets, socket.io, μWebSockets and SocketCluster are some good open-source options.
The number of concurrent connections a server can handle is rarely the bottleneck when it comes to server load. Most decent WebSocket servers can support thousands of concurrent connections, but what’s the workload required to process and respond to messages once the WebSocket server process has handled receipt of the actual data?
Typically there will be all kinds of potential concerns, such as reading and writing to and from a database, integration with a game server, allocation and management of resources for each client, and so forth.
As soon as one machine is unable to cope with the workload, you’ll need to start adding additional servers, which means now you’ll need to start thinking about load-balancing, synchronization of messages among clients connected to different servers, generalized access to client state irrespective of connection lifespan or the specific server that the client is connected to – the list goes on and on.
There’s a lot involved when implementing support for the WebSocket protocol, not just in terms of client and server implementation details, but also with respect to support for other transports to ensure robust support for different client environments, as well as broader concerns, such as authentication and authorization, guaranteed message delivery, reliable message ordering, historical message retention, and so forth. A data stream network such as Ably Realtime would be a good option to use in such cases if you'd rather avoid re-inventing the wheel.
There's a nice piece on WebSockets, Pub/Sub, and all issues related to scaling that I'd recommend reading.
Full disclosure: I'm a Developer Advocate for Ably but I hope this genuinely answers your question.
Could anybody explain the advantages of using Json-RPC over Json-API, and vice versa? Both formats are JSON-based, but where should I use one, and where the other?
Note: I may come across as a little biased. I am the author of the Json-RPC.net server library.
Json-RPC is a remote procedure call specification, and there are multiple libraries you can use to communicate using that protocol. It is not REST-based, and it is transport-agnostic: you can run it over HTTP, as is very common, but also over a socket or any other transport you find appropriate, which makes it quite flexible. You can also make server-to-client requests along with client-to-server ones by hosting the RPC server on either the client or the server.
Json-API is a specification for building REST APIs, and there are multiple libraries you can use to get started with it. In contrast to Json-RPC, it requires you to host it on an HTTP server; you cannot invoke functions on the client with it, and you cannot run it over a non-HTTP transport protocol. Being REST-based, it excels at providing information about resources. If you want an API based around the idea of Create, Read, Update, Delete on collections of resources, it may be a good choice.
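To make the contrast concrete, here are illustrative wire formats; the payloads are invented and not tied to any particular library (shown as Java text blocks):

    public class WireFormats {
        // Json-RPC 2.0: invoke a named function with parameters
        static final String RPC_REQUEST = """
            {"jsonrpc": "2.0", "method": "transfer",
             "params": {"from": "a", "to": "b", "amount": 10}, "id": 1}
            """;
        static final String RPC_RESPONSE = """
            {"jsonrpc": "2.0", "result": true, "id": 1}
            """;

        // Json-API: describe a resource with type, id and attributes
        static final String API_RESPONSE = """
            {"data": {"type": "accounts", "id": "a",
                      "attributes": {"balance": 90}}}
            """;
    }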
Json-API is going to be better if your API is resource-based and you want it to be browsable by a human without setting up documentation for it, though that human would likely need to be in the software engineering field to make any sense of it.
Json-RPC is going to be better if your API is function-based, or if you want the flexibility it provides. Json-RPC can still be used to manipulate resources by creating Create, Read, Update, and Delete functions for them, but you don't get browsability, since it is not REST-based. It can still be explored (not browsed) by a human by generating documentation based on the functions you expose.
A popular example of something that uses Json-RPC is Bitcoin.
There are a lot of popular REST-based APIs, and Json-API is a spec with a bunch of tools to help you do REST right.
--
Note: neither of those (Json-RPC or Json-API) is a great fit when you consider developer time, performance, or efficient use of network resources.
If you care about performance, efficiency, or developer time, take a look at Google's gRPC, which is fantastic in those regards and can reduce developer time even more than a REST API, since client and server code can be generated from a protocol definition file.
I'd like to log in to a RESTful back-end server written in Laravel 5, with the single-page front-end application leveraging Polymer's custom elements.
In this system, the persistence (CRUD) layer lives on the server, so authentication should be done by the server in response to the client's API request. When a request is valid, the server returns a User object in JSON format, including the user's role for access control on the client.
Here, my question is how I can keep the session even when the user refreshes the front-end page. Thanks.
This is an issue beyond Polymer, or even single-page apps generally. The question is how you keep session information in a browser. With SPAs it is a bit easier, since you can keep authentication tokens in memory, but traditional web apps have had this issue since the beginning.
You have two things you need to do:
Tokens: You need a user token that indicates that this user is authenticated. You want it to be something that cannot be guessed, or else someone can spoof it, so the token had better not be "jimsmith" but something more reliable. You have two choices. Either you use a randomly generated token which the server stores, so that when it is presented on future requests the server can validate it; this is how most session managers work in app servers like Node.js or Jetty. Or you do something cryptographic, so that the server only needs to validate the token mathematically rather than look it up in a store. I did the latter for Node in http://github.com/deitch/cansecurity, but there are various options. (A minimal token-generation sketch follows below.)
Storage: You need some way to store the tokens client-side that does not depend on JS memory, since you expect to reload the page.
There are several ways to do client-side storage. The most common by far is cookies. Since the browser stores them without your trying too hard, and presents them whenever you access the domain that the cookie is registered for, it is pretty easy to do. Many client-side and server-side auth libraries are built around them.
An alternative is html5 local storage. Depending on your target browsers and support, you can consider using it.
There are also ways you can play with URL parameters, but then you run the risk of losing them when someone switches pages. It can work, but I tend to avoid it.
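As promised above, here is a minimal token-generation sketch (in Java purely for illustration): an unguessable opaque token that the server stores and hands to the client in a cookie.

    import java.security.SecureRandom;
    import java.util.Base64;

    public final class Tokens {
        private static final SecureRandom RNG = new SecureRandom();

        // 256 bits of entropy, URL- and cookie-safe encoding
        public static String newSessionToken() {
            byte[] bytes = new byte[32];
            RNG.nextBytes(bytes);
            return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        }
    }

Whatever language you use, the points are the same: enough entropy that the token cannot be guessed, and an encoding that is safe to put in a cookie.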
I have not seen any components that handle cookies directly, but it shouldn't be too hard to build one.
Here is the gist for cookie management code I use for a recent app. Feel free to wrap it to build a Web component for cookie management.. as long as you share alike!
https://gist.github.com/deitch/dea1a3a752d54dc0d00a
UPDATE:
component.kitchen has a storage component here http://component.kitchen/components/TylerGarlick/core-resource-storage
The simplest way, if you use PHP, is to keep the user in a PHP session (like a normal non-SPA application).
PHP will store the user info on the server and automatically generate a cookie that the browser will send with every request. With a single server and no load balancing, the session data is local and very fast.
I'm looking for an easy way to implement an XMPP server speaking the following protocol:
https://developers.google.com/cloud-print/docs/rawxmpp
The only difference is that I must use X-GOOGLE-TOKEN authentication mechanism: https://stackoverflow.com/a/6211324/227244
The procedure is simple: I get the token from the data sent by a client, request user data based on this token and set the JID accordingly, appending some random chars to the resulting JID.
After that, other clients with possibly different tokens but the same user account connect to the XMPP resource, and push notifications are broadcast to the clients that are subscribed.
How much of the server code can be borrowed from currently available implementations? I would like to avoid writing all of the server code myself, though the logic is pretty simple. I know there are the ejabberd and Prosody XMPP servers, which implement lots of XEPs. Which one is easier to add the custom handling mechanism to? Can you suggest other stable alternatives for the core XMPP server?
The way Google has designed X-OAUTH2 is dead simple and straightforward to implement. In fact, there is no difference between how the PLAIN and X-OAUTH2 mechanisms work: you can simply take a standard PLAIN implementation and make it work for the Google X-OAUTH2 authentication mechanism with no extra effort.
I am the author of the Jaxl PHP library, and I recently announced support for X-OAUTH2 in the library. Here you can see the exact lines of code I had to write to support it. The only relevant piece of code is:
    switch ($mechanism) {
        case 'PLAIN':
        case 'X-OAUTH2':
            // Same SASL payload for both mechanisms:
            // NUL + username + NUL + password/token (empty authorization identity)
            $stanza->t(base64_encode("\x00".$user."\x00".$pass));
            break;
    }
For the X-OAUTH2 implementation, $pass is nothing but your OAuth token. In short, the password field from the PLAIN auth mechanism becomes the OAuth token for the X-OAUTH2 mechanism. Everything else remains the same.
Will HTML5's WebSocket API make Node.js irrelevant? If not, what are the differences between the two, and where should you use which?
Node.js is a server-side, event-based, asynchronous I/O framework.
HTML5 WebSockets are pretty much TCP sockets in the browser; that's all. They don't do much more than establish a two-way channel of communication.
For example, you would write your game server with Node.js and then use WebSockets to communicate between your browser-based client and the server.
An example of such a Game (disclaimer, I'm the author of the project):
http://github.com/BonsaiDen/NodeGame-Shooter
To get an idea what Node.js does, I recommend that you watch some talks that are listed on our Node.js tag wiki.
Actually, WebSockets makes Node far more applicable; a good number of the interesting server-side implementations of WebSockets use Node.
In fact, a library that is growing very fast in popularity is Socket.IO. Socket.IO is a server-side (Node) and client-side library that allows you to rapidly create interactive web applications. The client and server coordinate to pick the best communication mechanism available to both (WebSocket is preferred, with a fallback to long-polling). The client-side and server-side library interfaces are very similar (and both are JavaScript), so it's very easy to build web applications quickly.