GWT / Comet: any experience?

Is there any way to "subscribe" from GWT to a JSON object stream and listen for incoming events on a keep-alive connection, without trying to fetch them all at once? I believe that the buzzword-du-jour for this technology is "Comet".
Let's assume that I have an HTTP service which opens a keep-alive connection and puts JSON objects with incoming stock quotes there in real time:
{"symbol": "AAPL", "bid": "88.84", "ask":"88.86"}
{"symbol": "AAPL", "bid": "88.85", "ask":"88.87"}
{"symbol": "IBM", "bid": "87.48", "ask":"87.49"}
{"symbol": "GOOG", "bid": "305.64", "ask":"305.67"}
...
I need to listen to these events and update GWT components (tables, labels) in real time. Any ideas how to do it?

There is a GWT Comet Module for StreamHub:
http://code.google.com/p/gwt-comet-streamhub/
StreamHub is a Comet server with a free community edition. There is an example of it in action here.
You'll need to download the StreamHub Comet server and create a new SubscriptionListener, use the StockDemo example as a starting point, then create a new JsonPayload to stream the data:
Payload payload = new JsonPayload("AAPL");
payload.addField("bid", "88.84");
payload.addField("ask", "88.86");
server.publish("AAPL", payload);
...
Download the JAR from the Google Code site, add it to your GWT project's classpath and add the inherits entries to your GWT module:
<inherits name="com.google.gwt.json.JSON" />
<inherits name="com.streamhub.StreamHubGWTAdapter" />
Connect and subscribe from your GWT code:
StreamHubGWTAdapter streamhub = new StreamHubGWTAdapter();
streamhub.connect("http://localhost:7979/");
StreamHubGWTUpdateListener listener = new StockListener();
streamhub.subscribe("AAPL", listener);
streamhub.subscribe("IBM", listener);
streamhub.subscribe("GOOG", listener);
...
Then process the updates how you like in the update listener (also in the GWT code):
public class StockListener implements StreamHubGWTUpdateListener {
    public void onUpdate(String topic, JSONObject update) {
        String bid = ((JSONString)update.get("bid")).stringValue();
        String ask = ((JSONString)update.get("ask")).stringValue();
        String symbol = topic;
        ...
    }
}
Don't forget to include streamhub-min.js in your GWT project's main HTML page.

I have used this technique in a couple of projects, though it does have its problems. I should note that I have only done this specifically through GWT-RPC, but the principle is the same for whatever mechanism you are using to handle data. Depending on what exactly you are doing, there might not be much need to overcomplicate things.
First off, on the client side, I do not believe that GWT can properly support any sort of streaming data; the connection has to close before the client can actually process the data. What this means from a server-push standpoint is that your client will connect to the server and block until data is available, at which point it will return. Whatever code executes on the completed connection should immediately re-open a new connection with the server to wait for more data.
From the server side of things, you simply drop into a wait cycle (the java.util.concurrent package is particularly handy for this, with its blocking and timeout support) until new data is available. At that point the server can return a package of data down to the client, which will update accordingly. There are a bunch of considerations depending on what your data flow is like, but here are a few to think about:
Is a client getting every single update important? If so, then the server needs to cache any potential events between the time the client gets some data and then reconnects.
Are there going to be gobs of updates? If this is the case, it might be wiser to package up a number of updates and push down chunks at a time every several seconds rather than having the client get one update at a time.
The server will likely need a way to detect if a client has gone away to avoid piling up huge amounts of cached packages for that client.
I found there were two problems with the server-push approach. The first is that with lots of clients you get lots of open connections on the web server; depending on the web server in question, this could mean lots of threads being created and held open. The second has to do with the typical browser's limit of two concurrent connections per domain. If you are able to serve your images, CSS and other static content from second-level domains, this problem can be mitigated.
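To make the reconnect loop described above concrete, here is a minimal client-side sketch over GWT-RPC. It is only an illustration of the block-then-reconnect pattern; the QuoteService/QuoteServiceAsync interfaces and the waitForQuotes method are hypothetical names, not part of any real library.

import java.util.List;
import com.google.gwt.core.client.GWT;
import com.google.gwt.user.client.Timer;
import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

// Hypothetical GWT-RPC service: the server-side implementation blocks (with a timeout)
// until new quotes are available and then returns them as one batch.
@RemoteServiceRelativePath("quotes")
interface QuoteService extends RemoteService {
    List<String> waitForQuotes();
}

interface QuoteServiceAsync {
    void waitForQuotes(AsyncCallback<List<String>> callback);
}

public class QuotePoller {
    private final QuoteServiceAsync service =
            (QuoteServiceAsync) GWT.create(QuoteService.class);

    public void poll() {
        service.waitForQuotes(new AsyncCallback<List<String>>() {
            public void onSuccess(List<String> quoteJson) {
                // Update your tables/labels with the batch of quotes here,
                // then immediately re-open the connection to wait for more data.
                poll();
            }

            public void onFailure(Throwable caught) {
                // Back off briefly so a dead server is not hammered with reconnects.
                new Timer() {
                    public void run() {
                        poll();
                    }
                }.schedule(5000);
            }
        });
    }
}

On the server side the service implementation would typically block on something like a BlockingQueue.poll(timeout) from java.util.concurrent and return whatever has accumulated, as described above.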

There is indeed a Comet-like library for GWT: http://code.google.com/p/gwteventservice/
I haven't used it personally, so I can't really vouch for whether it's good or not, but the documentation seems quite good. Worth a try.
There are a few other ones I've seen, like gwt-rocket's cometd library.

Some preliminary ideas for Comet implementation for GWT can be found here... though I wonder whether there is something more mature.

Also, some insight on GWT/Comet integration is available there, using even more bleeding-edge technology: "Jetty Continuations". Worth taking a look.

Here you can find a description (with some source samples) of how to do this for IBM WebSphere Application Server. Shouldn't be too different with Jetty or any other Comet-enabled J2EE server. Briefly, the idea is: encode your Java object to JSON string via GWT RPC, then using cometd send it to the client, where it is received by Dojo, which triggers your JSNI code, which calls your widget methods, where you deserialize the object again using GWT RPC. Voila! :)
My experience with this setup is positive, there were no problems with it except for the security questions. It is not really clear how to implement security for comet in this case... Seems that Comet update servlets should have different URLs and then J2EE security can be applied.
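For the JSNI step specifically, the bridge between the Dojo callback and a GWT widget can be as small as the sketch below. The class and package names (com.example.QuoteWidget) and the global onQuote function are made up for illustration; the names in the WebSphere article differ.

package com.example;

import com.google.gwt.user.client.ui.Label;

public class QuoteWidget extends Label {

    // Called from JavaScript with the JSON string delivered by the Dojo/cometd subscription.
    public void onQuote(String json) {
        setText(json); // in real code: parse with GWT's JSON types and update the widget
    }

    // JSNI bridge: expose the Java method above as a global JS function that the Dojo
    // cometd callback can invoke ($wnd is GWT's alias for the host page's window object).
    public native void exportBridge() /*-{
        var self = this;
        $wnd.onQuote = function (json) {
            self.@com.example.QuoteWidget::onQuote(Ljava/lang/String;)(json);
        };
    }-*/;
}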

The JBoss Errai project has a message bus providing bi-directional messaging that is a good alternative to cometd.

We are using the Atmosphere Framework (http://async-io.org/) for server push/Comet in a GWT application.
On the client side the framework has GWT integration that is pretty straightforward. On the server side it uses a plain Servlet.
We are currently using it in production with 1000+ concurrent users in a clustered environment. We had some problems along the way that had to be solved by modifying the Atmosphere source. Also, the documentation is really thin.
The framework is free to use.

Related

Logging service allowing simple <img/> interface

I'm looking to do some dead-simple logging from a web app (client-side) to some remote service/endpoint. Sure, I could roll my own, but for the purpose of this task, let's assume I want an existing service like Logentries/Splunk/Logstash so that my viewers can still log debugging info if my backend goes down.
Most logging services offer an API where I can import some <script/> onto my page and then use an API like LE.log('string', data); [Logentries example]. However that pulls in a JS dependency and uses cross-domain XHR for probably well-founded reasons (like URI length limitations).
My question is if anyone can point me to a service that will let me send simple query params to a "pixel" endpoint (similar to how Google Analytics does it). Something like:
<script>
new Image().src = 'http://something.io/pixel/log/<API_TOKEN>?some_data=1234';
</script>
-- or, in pure HTML --
<img src="http://something.io/pixel/log/<API_TOKEN>?some_data=1234" style="display:none" />
I'd assume some of the big names in the logging-as-a-service space would have something like this but I've not found anything (or it's too specific to turn up any search results).
This would not be for analytics so much as error logging, debugging, etc. Fire-and-forget sort of stuff.
Any advice appreciated.
It's possible to do this with Logentries; they offer a pixel tracker.
They require that the data be sent base64-encoded, but that's quite simple in JavaScript.
From their documentation:
var encoded = encodeURIComponent(btoa("Log message"));
This data can then be used in a pixel tracker like this:
<img src="https://js.logentries.com/v1/logs/{API-TOKEN}?e={ENCODED_DATA}/">

Python 3.4 Sockets sendall function

import socket

def functions():
    print("hello")

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ('192.168.137.1', 20000)
sock.bind(server_address)
sock.listen(1)
conn, addr = sock.accept()
print('Connected by', addr)
conn.sendall(b"Welcome to the server")
My question is how to send a function to the client. I know that conn.sendall(b"Welcome to the server") will send data to the client, which can be decoded.
I would like to know how to send a function to the client, like
conn.sendall(function()) - this does not work
I would also like to know what function would allow the client to receive the function I am sending.
I have looked on the Python website for a function that could do this but I have not found one.
The functionality you are requesting is fundamentally impossible unless it is explicitly coded on the client side. If it were possible, one could write a virus that easily spreads to any remote machine. Instead, it is the client's own responsibility to decode incoming data in whatever manner it chooses.
Considering the case where a client really does want to receive code to execute, the issue is that the code must be represented in a form which, at the same time,
is detached from the server context and its specifics, and can be serialized and executed anywhere, and
allows secure execution in a kind of sandbox, because very few clients will allow arbitrary server code to do anything on the client side.
The latter is an extremely complex topic; you can read the security history of any web browser - most of the closed vulnerabilities are issues in such sandboxing.
(There are environments where such execution is allowed and desired, e.g. an Erlang cookie-based peering cluster. But in such a cluster, side B is also allowed to execute anything at side A.)
You should start by searching for an execution environment (a high-level virtual machine) which meets your needs in functionality and security. For Python, look at the multiprocessing module: its implementation of worker pools doesn't pass the code itself, but simplifies passing data for execution requests. Passing arbitrary Python data without functions is covered by the marshal and pickle modules.

InstanceContextMode.Single in WCF wsHttpBinding,webHttpBinding and REST

I have recently started development on a relatively simple WCF REST service which returns JSON formatted results. At first everything worked great, and the service was quickly up and running.
The main function of the service is to return a large chunk of data extracted from a database. This data rarely changes, so I decided to try and setup a caching mechanism to speed things up. To do this I planned to set InstanceContextMode.Single and ConcurrencyMode.Multiple, and then with some thread locks, safely return a static cached result. Every 5 minutes or so, or whenever IIS decides to clear everything, the data would be re-fetched from the database.
My issue is that InstanceContextMode.Single does not behave as expected. My understanding is that a single instance of my WCF service class should be created and maintained. However, the behaviour I see is that a completely new instance of my class is created per call. This includes re-initialising all static variables.
I tried changing the web service from webHttpBinding (used for REST) to wsHttpBinding and using the service as a SOAP config, but this results in exactly the same behaviour.
What am I doing wrong?! I have spent way too long trying to figure this out.
Any help would be great!
Strange. Can you try this and tell me what happens?
ServiceThrottlingBehavior ThrottleBehavior = new ServiceThrottlingBehavior();
ThrottleBehavior.MaxConcurrentSessions = 1;
ThrottleBehavior.MaxConcurrentCalls = 1;
ThrottleBehavior.MaxConcurrentInstances = 1;
ServiceHost Host = ...
Host.Description.Behaviors.Add(ThrottleBehavior);
And how do you know your single service instance isn't "Single"? Did you see multiple database connections in the profiler? Is that what suggested to you that your service isn't a single instance? In your service operation implementation, do you do some of the work on a separate thread?

Is there a way to 'listen' for a database event and update a page in real time?

I'm looking for a way to create a simple HTML table that can be updated in real-time upon a database change event; specifically a new record added.
In other words, think of it like an executive dashboard. If a sale is made and a new line is added in a database (MySQL in my case) then the web page should "refresh" the table with the new line.
I have seen some information on using the new EVENT GATEWAY feature, but all of the examples use ColdFusion as the "pusher" and not the "consumer". I would like to have ColdFusion both update / push an event to the gateway and also consume the response.
If this can be done using a combination of AJAX and CF please let me know!
I'm really just looking to understand where to get started with real-time updating.
Thank you in advance!!
EDIT / Explanation of selected answer:
I ended up going with #bpeterson76's answer because at the moment it was easiest to implement on a small scale. I really like his Datatables suggestion, and that's what I am using to update in close to real time.
As my site gets larger though (hopefully), I'm not sure if this will be a scalable solution as every user will be hitting a "listener" page and then subsequently querying my DB. My query is relatively simple, but I'm still worried about performance in the future.
In my opinion though, as HTML5 starts to become the web standard, the Web Sockets method suggested by #iKnowKungFoo is most likely the best approach. Comet with long polling is also a great idea, but it's a little cumbersome to implement / also seems to have some scaling issues.
So, let's hope web users start to adopt more modern browsers that support HTML5, because Web Sockets is a relatively easy and scalable way to get close to real time.
If you feel that I made the wrong decision please leave a comment.
Finally, here is some source code for it all:
Javascript:
Note: this is a very simple implementation. It's only looking to see if the number of records in the current DataTable has changed and, if so, updating the table and throwing an alert. The production code is much longer and more involved. This is just showing a simple way of getting a close-to-real-time update.
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.6.1/jquery.js"></script>
<script type="text/javascript" charset="utf-8">
    var originalNumberOfRecsInDatatable = 0;
    var oTable;
    var setChecker = setInterval(checkIfNewRecordHasBeenAdded, 5000); //5 second intervals

    function checkIfNewRecordHasBeenAdded() {
        //json object to post to CFM page
        var postData = {
            numberOfRecords: originalNumberOfRecsInDatatable
        };

        var ajaxResponse = $.ajax({
            type: "post",
            url: "./tabs/checkIfNewItemIsAvailable.cfm",
            contentType: "application/json",
            data: JSON.stringify( postData )
        });

        // When the response comes back, if an update is available
        // then re-draw the datatable and throw an alert to the user
        ajaxResponse.then(
            function( apiResponse ){
                var obj = jQuery.parseJSON(apiResponse);
                if (obj.isUpdateAvail == "Yes")
                {
                    oTable = $('#MY_DATATABLE_ID').dataTable();
                    oTable.fnDraw(false);
                    originalNumberOfRecsInDatatable = obj.recordcount;
                    alert('A new line has been added!');
                }
            }
        );
    }
</script>
Coldfusion:
<cfset requestBody = toString( getHttpRequestData().content ) />
<!--- Double-check to make sure it's a JSON value. --->
<cfif isJSON( requestBody )>
<cfset deserializedResult = deserializeJSON( requestBody )>
<cfset variables.numberOfRecords = deserializedResult.numberOfRecords>
<cfquery name="qCount" datasource="#Application.DBdsn#" username="#Application.DBusername#" password="#Application.DBpw#">
SELECT COUNT(ID) as total
FROM myTable
</cfquery>
<cfif qCount.total neq variables.numberOfRecords>
{"isUpdateAvail": "Yes", "recordcount": <cfoutput>#qCount.total#</cfoutput>}
<cfelse>
{"isUpdateAvail": "No"}
</cfif>
</cfif>
This isn't too difficult. The simple way would be to add via .append:
$( '#table > tbody:last').append('<tr id="id"><td>stuff</td></tr>');
Adding elements in real time isn't entirely possible. You'd have to run an Ajax query that updates in a loop to "catch" the change. So, not totally real-time, but very, very close to it. Your user really wouldn't notice the difference, though your server's load might.
But if you're going to get more involved, I'd suggest looking at DataTables. It gives you quite a few new features, including sorting, paging, filtering, limiting, searching, and ajax loading. From there, you could either add an element via ajax and refresh the table view, or simply append on via its API. I've been using DataTables in my app for some time now and they've been consistently cited as the number 1 feature that makes the immense amount of data usable.
--Edit --
Because it isn't obvious: to update the DataTable, assign your DataTables call to a variable:
var oTable = $('#selector').dataTable();
Then run this to do the update:
oTable.fnDraw(false);
UPDATE -- 5 years later, Feb 2016:
This is much more possible today than it was in 2011. New Javascript frameworks such as Backbone.js can connect directly to the database and trigger changes on UI elements including tables on change, update, or delete of data....it's one of these framework's primary benefits. Additionally, UI's can be fed real-time updates via socket connections to a web service, which can also then be caught and acted upon. While the technique described here still works, there are far more "live" ways of doing things today.
You can use SSE (Server-Sent Events), a feature of HTML5.
Server-Sent Events (SSE) is a standard describing how servers can initiate data transmission towards clients once an initial client connection has been established. They are commonly used to send message updates or continuous data streams to a browser client and designed to enhance native, cross-browser streaming through a JavaScript API called EventSource, through which a client requests a particular URL in order to receive an event stream.
Here's a simple example:
http://www.w3schools.com/html/html5_serversentevents.asp
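If your backend happens to be Java rather than ColdFusion, a minimal SSE endpoint is just a servlet that writes "data:" blocks on a text/event-stream response. This is only a sketch; the payload and the idea of polling every few seconds are invented for illustration, and holding a servlet thread per client like this does not scale without async I/O.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// The browser connects once with: new EventSource("/updates")
// and fires an onmessage event for every "data:" block written below.
public class UpdateStreamServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/event-stream");
        resp.setCharacterEncoding("UTF-8");
        resp.setHeader("Cache-Control", "no-cache");
        PrintWriter out = resp.getWriter();
        for (int i = 0; i < 10 && !out.checkError(); i++) { // real code: loop while the client stays connected
            out.write("data: {\"newRecords\": 1}\n\n");     // one event per new row detected
            out.flush();
            try { Thread.sleep(5000); } catch (InterruptedException e) { break; }
        }
    }
}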
In MS SQL, you can attach a trigger to a table insert/delete/update event that can fire a stored proc to invoke a web service. If the web service is CF-based, you can, in turn, invoke a messaging service using event gateways. Anything listening to the gateway can be notified to refresh its contents. That said, you'd have to see if MySQL supports triggers and accessing web services via stored procedures. You'd also have to have some sort of component in your web app that's listening to the messaging gateway. It's easy to do in Adobe Flex applications, but I'm not sure if there are comparable components accessible in JavaScript.
While this answer does not come close to directly addressing your question, perhaps it will give you some ideas as to how to solve the problem using db triggers and CF messaging gateways.
M. McConnell
With "current" technologies, I think long polling with Ajax is your only choice. However, if you can use HTML5, you should take a look at WebSockets which gives you the functionality you want.
http://net.tutsplus.com/tutorials/javascript-ajax/start-using-html5-websockets-today/
WebSockets is a technique for two-way communication over one (TCP) socket, a type of PUSH technology. At the moment, it’s still being standardized by the W3C; however, the latest versions of Chrome and Safari have support for WebSockets.
http://html5demos.com/web-socket
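As a rough idea of what the server half can look like in Java, here is a sketch using the standard Java WebSocket API (JSR 356). The /updates path and the broadcast helper are assumptions for illustration; application code would call broadcast() after committing a new row.

import java.io.IOException;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Every connected browser gets a Session; broadcast() pushes a message to all of them.
@ServerEndpoint("/updates")
public class UpdateEndpoint {

    private static final Set<Session> sessions = new CopyOnWriteArraySet<Session>();

    @OnOpen
    public void onOpen(Session session) {
        sessions.add(session);
    }

    @OnClose
    public void onClose(Session session) {
        sessions.remove(session);
    }

    // Call this from the code that inserts the new database record.
    public static void broadcast(String json) {
        for (Session s : sessions) {
            try {
                s.getBasicRemote().sendText(json);
            } catch (IOException e) {
                sessions.remove(s);
            }
        }
    }
}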
Check out AJAX long polling.
A place to start: Comet.
No, you can't have db code execute server-side code. But you could write a service to poll the db periodically to see if a new record has been added, and then notify the code you have that needs pseudo-real-time updates.
The browser can receive real-time updates via a BOSH connection to a Jabber/XMPP server. All the bits and pieces can be found in this book, http://professionalxmpp.com/, which I highly recommend. If you can send an XMPP message whenever a record is added to your DB, then it is relatively easy to build the dashboard you want. You need strophe.js, a Jabber/XMPP server (e.g. ejabberd), and an HTTP server for proxying http-bind requests. All the details can be found in the book. A must-read which I strongly believe will solve your problem.
The way I would achieve the notification is after the database update has been successfully committed I would publish an event that would tell any listening systems or even web pages that the change has occurred. I've detailed one way of doing this using an e-commerce solution in a recent blog post. The blog post shows how to trigger the event in ASP.NET but the same thing can easily be done in any other language since ultimately the trigger is performed via a REST API call.
The solution in this blog post uses Pusher, but there's no reason why you couldn't install your own real-time server or use a message queue to communicate between your app and the real-time server, which would then push the notification to the web page or client application.

Accessing JBoss JMX data via JSON

Is there a way to access the JBoss JMX data via JSON?
I am trying to pull a management console together using data from a number of different servers. I can achieve this using screen scraping, but I would prefer to use a JSON object or XML response if one exists, but I have not been able to find one.
You should have a look at Jolokia, a full-featured JSON/HTTP adapter for JMX.
It supports, and has been tested on, JBoss as well as many other platforms. Jolokia
is an agent which is deployed as a normal Java EE WAR, so you simply drop it into the
deploy directory within your JBoss installation. There are also some client libraries available, e.g. jmx4perl, which allows programmatic access to the agent.
There is much more to discover and it is actively developed.
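Once the agent is deployed, a management console can fetch JSON with nothing more than an HTTP GET. A small sketch from plain Java follows; the host, port and /jolokia context path are assumptions, so adjust them to your deployment.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Reads one MBean attribute as a JSON document through Jolokia's REST interface.
public class JolokiaReadExample {
    public static void main(String[] args) throws IOException {
        URL url = new URL("http://localhost:8080/jolokia/read/java.lang:type=Memory/HeapMemoryUsage");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
        StringBuilder json = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            json.append(line);
        }
        in.close();
        System.out.println(json); // JSON describing the current heap usage
    }
}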
If you are using Java, you can write a small program that makes a JMX request to the JBoss server and transforms the response into XML/JSON.
The following small code snippet may help you.
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

String strInitialProp = "javax.management.builder.initial";
System.setProperty(strInitialProp, "mx4j.server.MX4JMBeanServerBuilder");
String urlForJMX = "jnp://localhost:1099"; // for JBoss
ObjectName objAll = ObjectName.getInstance("*:*"); // match every registered MBean
JMXServiceURL jmxUrl = new JMXServiceURL(urlForJMX);
MBeanServerConnection jmxServerConnection = JMXConnectorFactory.connect(jmxUrl).getMBeanServerConnection();
System.out.println("Total MBeans :: " + jmxServerConnection.getMBeanCount());
Set mBeanSet = jmxServerConnection.queryNames(objAll, null); // all matching ObjectNames
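The snippet stops before the XML/JSON step; one way to finish it, assuming the org.json library (or any other JSON library) is on the classpath, is to serialize the ObjectNames returned by queryNames:

import java.util.Set;
import javax.management.ObjectName;
import org.json.JSONArray;

// Turns the Set returned by queryNames() into a JSON array of MBean names.
public class MBeanJsonHelper {
    public static String mbeanNamesToJson(Set<ObjectName> mBeanSet) {
        JSONArray names = new JSONArray();
        for (ObjectName name : mBeanSet) {
            names.put(name.getCanonicalName());
        }
        return names.toString();
    }
}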
There are some JMX-REST bridges available that internally talk JMX to the MBeans and expose the results over REST calls (which can deliver JSON as the data format).
See e.g. polarrose or jmx-rest-access. There are a few others out there.