Why does DeferredResult end on setResult() when trying to use SSE?

I am trying to implement a Server-Sent Events (SSE) webpage powered by Spring. My test code does the following:
The browser uses EventSource(url) to connect to the server. Spring accepts the request with the following controller code:
@RequestMapping(value="myurl", method = RequestMethod.GET, produces = "text/event-stream")
@ResponseBody
public DeferredResult<String> subscribe() throws Exception {
    final DeferredResult<String> deferredResult = new DeferredResult<>();
    resultList.add(deferredResult);
    deferredResult.onCompletion(() -> {
        logTimer.info("deferredResult " + deferredResult + " completion");
        resultList.remove(deferredResult);
    });
    return deferredResult;
}
So essentially it puts the DeferredResult into a List and registers a completion callback so that I can remove it from the List on completion.
Now I have a timer method that periodically outputs the current timestamp to all registered browsers via their DeferredResults.
@Scheduled(fixedRate = 10000)
public void processQueues() {
    Date d = new Date();
    log.info("outputting to " + LoginController.resultList.size() + " connections");
    LoginController.resultList.forEach(deferredResult -> deferredResult.setResult("data: " + d.getTime() + "\n\n"));
}
The data is sent to the browser and the following client code works:
var source = new EventSource('/myurl');
source.addEventListener('message', function (e) {
    console.log(e.data);
    $("#content").append(e.data).append("<br>");
});
Now the problem:
The completion callback on the DeferredResult is called on every setResult() call in the timer thread. So for some reason the connection is closed after the setResult() call. The SSE in the browser reconnects as per spec, and then the same thing happens again. So on the client side I have polling behavior, but I want a request that is kept open, where I can push data through the same DeferredResult over and over again.
Am I missing something here? Is DeferredResult not capable of sending multiple results? I put a 10-second delay into the timer thread to see whether the request only terminates after setResult(). In the browser the request is kept open until the timer pushes the data, but then it's closed.
Thanks for any hint on that. One more note: I added async-supported to all filters/servlets in Tomcat.

Indeed, a DeferredResult can be set only once (notice that setResult returns a boolean). It completes processing with the full range of Spring MVC processing options, meaning that everything you know about what happens during a Spring MVC request remains more or less the same, except that the return value is produced asynchronously.
What you need for SSE is something more focused, i.e. a way to write each value to the response using an HttpMessageConverter. I've created a ticket for that: https://jira.spring.io/browse/SPR-12212.
Note that Spring's SockJS support does have an SSE transport, which takes care of a few extras such as cross-domain requests with cookies (important for IE). It's also used underneath a WebSocket API with WebSocket-style messaging (even if WebSocket is not available on either the client or the server side), which fully abstracts the details of HTTP long polling.
As a workaround you can also write directly to the Servlet response using an HttpMessageConverter.
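For illustration, here's a minimal sketch of that workaround (the names SseServlet and CONTEXTS are mine, not from the question or from Spring): a plain Servlet 3.0 async servlet keeps each response open, and the scheduled timer writes to every open response instead of calling setResult():

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Date;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/myurl", asyncSupported = true)
public class SseServlet extends HttpServlet {

    // Open connections; CopyOnWriteArrayList can be iterated safely while entries are removed.
    static final List<AsyncContext> CONTEXTS = new CopyOnWriteArrayList<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        resp.setContentType("text/event-stream");
        resp.setCharacterEncoding("UTF-8");
        AsyncContext ctx = req.startAsync(); // keeps the response open after doGet returns
        ctx.setTimeout(0);                   // disable the container's async timeout
        CONTEXTS.add(ctx);
    }
}

// In the scheduled bean, replacing the setResult() loop:
@Scheduled(fixedRate = 10000)
public void processQueues() {
    long now = new Date().getTime();
    for (AsyncContext ctx : SseServlet.CONTEXTS) {
        try {
            PrintWriter writer = ctx.getResponse().getWriter();
            writer.write("data: " + now + "\n\n");
            writer.flush();
            if (writer.checkError()) { // PrintWriter swallows IOExceptions; checkError() surfaces them
                throw new IOException("client disconnected");
            }
        } catch (IOException ex) {
            SseServlet.CONTEXTS.remove(ctx); // drop dead connections instead of completing the request
        }
    }
}

(For what it's worth, SPR-12212 was later resolved: Spring Framework 4.2 introduced ResponseBodyEmitter and SseEmitter, which support exactly this write-many-values-per-request pattern without dropping down to the Servlet API.)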

Related

Handling intensive server-side tasks? Do I still use async/await in the front-end?

How do I handle really intensive server-side tasks that can take multiple minutes? It's a user-facing task, so the user gives me some data, and the server then works on it in the backend.
I am fairly new to this, but I think my browser won't "wait" that long if I am using async/await? But then if I don't use async/await, I won't know whether the task was completed successfully?
Or am I missing something here?
The bigger the task, the more brittle a solution that depends on a single HTTP request/response becomes. Imagine that the connection breaks after the task has been 99% completed. The client would have to repeat the whole thing.
Instead, I suggest a pattern like the following, which depends on several HTTP requests (a minimal sketch follows the list):
The client (browser) makes a request like POST /starttask to start the task and receives a "task ID" in the response.
The task runs on the server while the client can do other things. Any results that the task computes are stored in a database under the task ID.
The client can check the task progress by making regular requests like GET /task/<taskID> and receive a progress notification (50% completed). This can be used to animate a "progress bar" on the UI.
When the task is 100% completed and has yielded a result that the client needs to know, it can retrieve that result with a request like GET /taskresult/<taskID>.
If the task result is huge, the client may want to repeat the result retrieval, perhaps with paging (GET /taskresult/<taskID>?page=1 and so on) until it has received and processed the entire result. This should not burden the server much, because it simply reads the task result from the database.
Finally, the client can delete the task result from the server database with another request like POST /taskcleanup/<taskID>.
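A minimal, hypothetical sketch of steps 1 through 4 (Java/Spring purely for illustration; the endpoint paths mirror the list above, while the class names and the in-memory status map are invented - a real implementation would store results in a database and handle unknown task IDs):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.springframework.web.bind.annotation.*;

@RestController
public class TaskController {

    private final Map<String, TaskStatus> tasks = new ConcurrentHashMap<>();
    private final ExecutorService executor = Executors.newFixedThreadPool(4);

    @PostMapping("/starttask")
    public String startTask(@RequestBody String params) {
        String taskId = UUID.randomUUID().toString();
        tasks.put(taskId, new TaskStatus());
        executor.submit(() -> runTask(taskId, params)); // the work continues after this response is sent
        return taskId;                                  // the client gets its task ID immediately
    }

    @GetMapping("/task/{taskId}")
    public String progress(@PathVariable String taskId) {
        return tasks.get(taskId).percentComplete + "% completed"; // drives the progress bar
    }

    @GetMapping("/taskresult/{taskId}")
    public String result(@PathVariable String taskId) {
        return tasks.get(taskId).result; // empty until the task has finished
    }

    private void runTask(String taskId, String params) {
        TaskStatus status = tasks.get(taskId);
        // ... do the real work here, updating status.percentComplete as it proceeds ...
        status.percentComplete = 100;
        status.result = "the computed result";
    }

    private static class TaskStatus {
        volatile int percentComplete = 0;
        volatile String result = "";
    }
}

The client fires POST /starttask once, then polls GET /task/<taskID> on a timer until it reports 100%, and finally fetches GET /taskresult/<taskID>.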
Using async/await will work, as it waits until the promise (the request to the backend) has been fulfilled. You could show some kind of loading graphic to the user, which is how other websites handle lengthy tasks.
It depends how big the task is, but as an example, if the task is fairly small (e.g. 10 seconds) we could use a 'loading' state to decide whether to display the loading graphic:
async function example() {
    setLoading(true);
    try {
        const response = await axios.get('/user?ID=12345');
        console.log(response);
    } catch (error) {
        console.error(error);
    } finally {
        setLoading(false);
    }
}
Axios Minimal Example
I think it would be bad to keep the connection open waiting for the response for a couple of minutes.
Instead, I would recommend SignalR server-side notifications (or an equivalent) to notify the front end about task updates.
The notification DTO would contain all the information needed about the task.
Backend:
// POST method
void startTask(params) {
    // start the backend processing
    // after completion, notify the clients
    signalRHub.notify();
}
On the front end you just need to subscribe to the notifications and add handlers for them.

Google Apps Script: I want to display Script Property in the client-side code, but its value is undefined [duplicate]

I am trying to write a Google Apps Script which has a client-side and a server-side component. The client-side component displays a progress bar. The client calls server-side functions (which are called asynchronously), whose progress has to be shown in the client-side progress bar. Now, what I want is to be able to update the client-side progress bar based on feedback from the server-side functions. Is this possible?
The complexity is created by the fact that JS makes the server-side calls asynchronously, and hence I cannot really have a loop on the client side calling the functions and updating the progress bar.
I could of course split up the execution of the server-side function into multiple steps, calling them one by one from the client side, each time updating the progress bar. But I'm wondering if there's a better solution. Is there a way to call a client-side function from the server side, and have that update the progress bar based on the argument passed? Or is there a way to access the client-side progress bar object from the server side and modify it?
The way I've handled this is to have a middleman (giving a shout-out now to Romain Vialard for the idea) handle the progress: Firebase.
The HTML/client side can connect to your Firebase account (they're free!) and "watch" for changes.
The server-side script can update the database as it progresses through the code - those changes are immediately fed back to the HTML page via Firebase. With that, you can update a progress bar.
Romain has a small example/description here
The code I use on the HTML/client side:
// Connect to Firebase
var fb = new Firebase("https://YOUR_DATABASE.firebaseio.com/");
// Grab the 'child' holding the progress info
var ref = fb.child('Progress');
// When the value changes
ref.on("value", function(data) {
    if (data.val()) {
        var perc = data.val() * 100;
        document.getElementById("load").innerHTML = "<div class='determinate' style='width:" + perc + "%'></div>";
    }
});
On the server side, I use the Firebase library for Apps Script to update the progress:
var fb = FirebaseApp.getDatabaseByUrl("https://YOUR_DATABASE.firebaseio.com/");
var data = { "Progress": 0.25 };
fb.updateData("/", data);
Rather than tying the work requests and progress updating together, I recommend you separate those two concerns.
On the server side, functions that are performing work at the request of the client should update a status store; this could be a ScriptProperty, for example. The work functions don't need to respond to the client until they have completed their work. The server should also have a function that can be called by the client to simply report the current progress.
When the client first calls the server to request work, it should also call the progress reporter. (Presumably, the first call will get a result of 0%.) The onSuccess handler for the status call can update whatever visual you're using to express progress, then call the server's progress reporter again, with itself as the success handler. This should be done with a delay, of course.
When progress reaches 100%, or the work is completed, the client's progress checker can be shut down.
Building on Jens' approach, you can use the CacheService as your data proxy, instead of an external service. The way that I've approached this is to have my "server" application generate an interim cache key which it returns to the "client" application's success callback. The client application then polls this cache key at an interval to see if a result has been returned into the cache by the server application.
The server application returns an interim cache key and contains some helper functions to simplify checking this on the client side:
function someAsynchronousOperation() {
    var interimCacheKey = createInterimCacheKey();
    doSomethingComplicated(function(result) {
        setCacheKey(interimCacheKey, result);
    });
    return interimCacheKey;
}

function createInterimCacheKey() {
    return Utilities.getUuid();
}

function getCacheKey(cacheKey, returnEmpty) {
    var cache = CacheService.getUserCache();
    var result = cache.get(cacheKey);
    if (result !== null || returnEmpty) {
        return result;
    }
}

function setCacheKey(cacheKey, value) {
    var cache = CacheService.getUserCache();
    return cache.put(cacheKey, value);
}
Note that by default getCacheKey doesn't return a value. This is so that google.script.run's successHandler doesn't get invoked with a result until the cache entry comes back non-null.
In the client application (in which I'm using Angular), you call off to the asynchronous operation on the server and wait for its result:
google.script.run.withSuccessHandler(function(interimCacheKey) {
    var interimCacheCheck = $interval(function() {
        google.script.run.withSuccessHandler(function(result) {
            $interval.cancel(interimCacheCheck);
            handleSomeAsynchronousOperation(result);
        }).getCacheKey(interimCacheKey, false);
    }, 1000, 600); // Check the result once per second for 10 minutes
}).someAsynchronousOperation();
Using this approach you could also report progress, and only cancel your check after the progress reaches 100%. You'd want to eliminate the interval expiry in that case.

Service Worker not caching API content on first load

I've created a service-worker-enabled application that is intended to cache the response from an AJAX call so it's viewable offline. The issue I'm running into is that the service worker caches the page, but not the AJAX response, the first time it's loaded.
If you visit http://ivesjames.github.io/pwa and switch to airplane mode after the SW toast, it shows no API content. If you go back online, load the page, and do it again, it will load the API content offline on the second load.
This is what I'm using to cache the API response (taken from the Polymer docs):
(function(global) {
    global.untappdFetchHandler = function(request) {
        // Attempt to fetch(request). This will always make a network request, and will include the
        // full request URL, including the search parameters.
        return global.fetch(request).then(function(response) {
            if (response.ok) {
                // If we got back a successful response, great!
                return global.caches.open(global.toolbox.options.cacheName).then(function(cache) {
                    // First, store the response in the cache, stripping away the search parameters to
                    // normalize the URL key.
                    return cache.put(stripSearchParameters(request.url), response.clone()).then(function() {
                        // Once that entry is written to the cache, return the response to the controlled page.
                        return response;
                    });
                });
            }
            // If we got back an error response, raise a new Error, which will trigger the catch().
            throw new Error('A response with an error status code was returned.');
        }).catch(function(error) {
            // This code is executed when there's either a network error or a response with an error
            // status code was returned.
            return global.caches.open(global.toolbox.options.cacheName).then(function(cache) {
                // Normalize the request URL by stripping the search parameters, and then return a
                // previously cached response as a fallback.
                return cache.match(stripSearchParameters(request.url));
            });
        });
    };
})(self);
And then I define the handler in the sw-import:
<platinum-sw-import-script href="scripts/untappd-fetch-handler.js">
<platinum-sw-fetch handler="untappdFetchHandler"
                   path="/v4/user/checkins/jimouk?client_id=(apikey)&client_secret=(clientsecret)"
                   origin="https://api.untappd.com">
</platinum-sw-fetch>
<paper-toast id="caching-complete"
             duration="6000"
             text="Caching complete! This app will work offline.">
</paper-toast>
<platinum-sw-register auto-register
                      clients-claim
                      skip-waiting
                      base-uri="bower_components/platinum-sw/bootstrap"
                      on-service-worker-installed="displayInstalledToast">
    <platinum-sw-cache default-cache-strategy="fastest"
                       cache-config-file="cache-config.json">
    </platinum-sw-cache>
</platinum-sw-register>
Is there somewhere I'm going wrong? I'm not quite sure why it works on load #2 instead of load #1.
Any help would be appreciated.
While the skip-waiting + clients-claim attributes should cause your service worker to take control as soon as possible, it's still an asynchronous process that might not kick in until after your AJAX request is made. If you want to guarantee that the service worker will be in control of the page, then you'd need to either delay your AJAX request until the service worker has taken control (following, e.g., this technique), or alternatively, you can use the reload-on-install attribute.
Equally important, though, make sure that your <platinum-sw-import-script> and <platinum-sw-fetch> elements are children of your <platinum-sw-register> element, or else they won't have the intended effect. This is called out in the documentation, but unfortunately it's just a silent failure at runtime.

GameMaker runner crashes when making HTTP requests

I recently got back into using GameMaker:Studio, and hoo boy have there been some massive updates since I last used it! In fact the last time I used it they only had Windows and HTML5 as export options...
Anyway, eager to try out some of the new stuff, I decided to take a shot at the native HTTP functions, since they looked very promising.
I did a test using http_post_string() to great effect, sending a JSON string to my server and getting a JSON string back. The returned string actually represented an object with a single property, "echo", which contained the HTTP request that had been made, just to see what GM:S was sending.
I didn't like that it sent Content-Type: application/x-www-form-urlencoded when it was quite clearly JSON, and I wanted the ability to set my own User Agent string so that the server could know which game was talking to it without having to pass an extra parameter.
So I re-created the same request using the lower-level http_request() function. Everything looked fine, so I tested it.
It crashed. Like, no error messages or anything, just a total crash and Windows had to force-close it.
So here I am with code that by all rights should work fine, but crashes when run...
///send_request(file,ds_map_data,callback_event_id)
var request = ds_map_create();
request[? "instance"] = id;
request[? "event"] = argument2;
if (!instance_exists(obj_ajax_callback)) {
    instance_create(0, 0, obj_ajax_callback);
}
var payload = json_encode(argument1);
var headers = ds_map_create();
headers[? "Content-Length"] = string_length(payload);
headers[? "Content-Type"] = "application/json";
headers[? "User-Agent"] = obj_ajax_callback.uastring;
var xhr = http_request("https://example.com/" + argument0, "POST", headers, payload);
with (obj_ajax_callback) {
    active_callbacks[? xhr] = request;
}
ds_map_destroy(headers);
obj_ajax_callback is an object that maintains a ds_map of active requests, and in its HTTP event it listens for those requests' callbacks and reacts along the lines of with(request[? "instance"]) event_user(request[? "event"]) so that the calling object can handle the response. This hasn't changed from the fully working http_post_string() attempt.
Any idea what could be causing this crash?
The reason why this crashes is that you are sending the Content-Length header as a real instead of a string. If you change your line to
headers[? "Content-Length"] = string(string_length(payload));
it should work.

WebAPI and HTML5 SSE

I was trying to encapsulate a partial view to show feedback that I can push back to the client.
This article shows a method of pushing data back using HTML5 Server-Sent Events (SSE).
I noticed that if I opened up several browser tabs and then closed one, I got exceptions because the logic didn't remove the respective stream from the ConcurrentQueue. I amended the code as below:
private static void TimerCallback(object state)
{
    StreamWriter data;
    Random randNum = new Random();
    // foreach (var data in _streammessage)
    for (int x = 0; x < _streammessage.Count; x++)
    {
        _streammessage.TryDequeue(out data);
        data.WriteLine("data:" + randNum.Next(30, 100) + "\n");
        try
        {
            data.Flush();
            _streammessage.Enqueue(data);
        }
        catch (Exception ex)
        {
            // don't re-add the stream, as an error occurred - presumably the client has lost the connection
        }
    }
    // To set the timer with a random interval
    _timer.Value.Change(TimeSpan.FromMilliseconds(randNum.Next(1, 3) * 500), TimeSpan.FromMilliseconds(-1));
}
I also had to amend the OnStreamAvailable member, as the framework syntax had changed so that the second parameter is an HttpContent rather than HttpContentHeaders:
public static void OnStreamAvailable(Stream stream, HttpContent headers, TransportContext context)
The problem now is that I am still getting inconsistent behaviour when I add or remove clients, i.e. it times out when trying to initialise a new client. Does anyone have any ideas, or more examples of using SSE with Web API and the correct "framework of methods" to handle disconnected clients?
Cheers
Tim
This article is actually an adaptation of my original article from May - http://www.strathweb.com/2012/05/native-html5-push-notifications-with-asp-net-web-api-and-knockout-js/ (notice that even the variable names and port numbers are the same :-).
It is a very valid point that you are raising, and detecting a broken connection is not very easy with this setup. The main reason is that while ASP.NET (the host) allows you to check for a broken connection, there is no notification mechanism between ASP.NET (the host) and Web API informing it about that.
That is why, in order to detect a broken connection (disconnected client), you should really try writing to the stream and catch any error - that would mean the client has been disconnected.
I asked the same question of Brad Wilson/Marcin Dobosz/Damian Edwards at aspConf, and Damian suggested using HttpContext.Current.Response.IsClientConnected - so basically bypassing Web API and obtaining the connectivity info directly from the underlying host (however, there is still a race condition involved anyway). That is really .NET 4. He also pointed out an interesting way in which this problem could be avoided in .NET 4.5 using an async cancellation token. Frankly, I have never got around to testing it, but perhaps this is something you should explore.
You can see their response to this problem in this video - http://channel9.msdn.com/Events/aspConf/aspConf/Ask-The-Experts - fast forward to 48:00