Bayeux protocol and how it supports multiple tabs opened in a single browser - comet

My question is about how the Bayeux protocol makes it possible to have multiple tabs open in a single browser. Even if we use the publish/subscribe paradigm, we still need to send a request to the server to subscribe, so will that connection be kept open? If it is kept open, how does the protocol avoid hitting the browser's connection limit? If it is not kept open, how does the server send data to multiple tabs?

The HTTP/1.1 specification recommended a limit of 2 connections per host, but that was only a recommendation; no modern browser actually imposes a 2-connection limit anymore.
However, to address this, the Bayeux protocol also recommends that applications use cookies to detect when multiple tabs are open and prompt the user to close all but one.
http://svn.cometd.com/trunk/bayeux/bayeux.html
It is RECOMMENDED that Bayeux client implementations use client side persistence or cookies to detect multiple instances of Bayeux clients running within the same HTTP client. Once detected, the user MAY be offered the option to disconnect all but one of the clients. It MAY be possible for client implementations to use client side persistence to share a Bayeux client instance.

The updated Bayeux specification is at http://docs.cometd.org/reference/#bayeux.
The handling of multiple clients from the same browser is discussed in the CometD reference at http://docs.cometd.org/reference/#java_server_multiple_sessions.
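As a rough illustration of the recommendation quoted above, a client could use window.localStorage (or a cookie) to notice that another tab already holds the Bayeux connection. This is only a sketch: the key name, the timeout values and the alert text are made up for the example and are not part of the Bayeux spec or the CometD API.

    // Minimal sketch: detect a second tab via localStorage (all names are illustrative).
    var LOCK_KEY = 'bayeux-client-lock';
    var existing = window.localStorage.getItem(LOCK_KEY);

    if (existing && (Date.now() - Number(existing)) < 10000) {
      // Another tab refreshed the lock within the last 10 seconds:
      // offer to disconnect this tab, as the spec suggests.
      alert('Another tab is already connected. Please close it or continue there.');
    } else {
      // Become the "primary" tab and keep refreshing the lock while connected.
      window.localStorage.setItem(LOCK_KEY, String(Date.now()));
      setInterval(function () {
        window.localStorage.setItem(LOCK_KEY, String(Date.now()));
      }, 5000);
      // ... create the Bayeux/CometD client here ...
    }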

Related

How to force browser to send all requests using only one connection (socket)

I have an embedded product, and unfortunately due to limited resources it can handle only one SSL connection at a time. Browsers try to open more, which I have to refuse.
Is there any way to force browsers to use only one connection for all of their requests?

Web browsers assume that my HTTP server is prepared to accept many connections

I'm developing a web server and application on a microcontroller where resources (especially RAM) are very limited. When I point Chrome or Firefox to the web page hosted by my embedded web server, it attempts to establish a total of 6 concurrent TCP connections. First it opens one and loads the main HTML, then it attempts to open 5 more for loading various resources.
My server only has resources to handle 3 concurrent connections. Currently the device is programmed to refuse further connections by sending an RST packet in response to the SYN packets. So the first 3 SYN packets get a normal SYN-ACK reply and HTTP traffic starts, the latter 3 get an RST.
Both Chrome and Firefox seem to decide that the RST responses are fatal and abandon loading certain resources.
If the device does not send these RST responses (just forgets about the SYNs), Chrome loads the page fine. But I don't like the zombie connection attempts on the client.
Should browsers really be assuming the RST responses to connection attempts are fatal? I was under the impression that an HTTP server is allowed to close the connection at any time and the client should retry at least GET requests transparently.
What is the best solution, practically? Keep in mind that I may also want to support multiple web clients with, for example, 4 connections in total, and if the first client grabs all 4, there are none left for the second client.
Note that for my application there is zero benefit in having parallel connections. Why must I support so many connections just because the client thinks it will be faster? Even if I manage to support 6 now, what happens when the browser vendors decide to increase the default and break my application?
EDIT - I see the same issue with Firefox as well not just Chrome.
Indeed, modern browsers will try to use 6 connections, in some cases even 8. You have two options:
Just ACK but take your time replying
Use javascript to load your resources one-by-one
I am assuming here that you can't increase the concurrent capacity of the server (being a small device) or radically change the appearance of the page.
Option #2 removes most of the resources from the page and instead has JavaScript programmatically request every resource and add it to the page via the DOM. This might be a serious rework of the page (see the sketch below).
I should also mention that you can inline images (the image data is just a base64 string in the page) so that you can prevent the (mostly) parallel fetching of images done by modern browsers.
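A minimal sketch of option #2, assuming the resources are images listed in a hypothetical array; each request starts only after the previous one has finished, so the browser never needs more than one extra connection:

    // Illustrative only: load images strictly one at a time via the DOM.
    var resources = ['a.png', 'b.png', 'c.png'];   // hypothetical resource URLs

    function loadNext(index) {
      if (index >= resources.length) return;       // all resources loaded
      var img = new Image();
      img.onload = function () {
        document.body.appendChild(img);            // add it to the page via the DOM
        loadNext(index + 1);                       // start the next request only now
      };
      img.onerror = function () { loadNext(index + 1); }; // skip failures, keep going
      img.src = resources[index];                  // this triggers the single request
    }

    loadNext(0);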
I was under the impression that an HTTP server is allowed to close the connection at any time and the client should retry at least GET requests transparently.
The server is allowed to close the connection after the first response has been sent, i.e. it may ignore the client's wish to keep the connection open. It is not allowed to close the connection before or while the first request is being handled.
What is the best solution, practically?
Don't use too many resources that need to be retrieved in separate requests. Use data URLs and similar techniques, or increase your listen queue to accept more than 3 TCP connections at the same time.
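For example, an image can be embedded as a data URL so it costs no extra request; the base64 payload below is truncated and purely illustrative:

    // Illustrative only: embed the image data directly, so no separate request is made.
    var img = document.createElement('img');
    img.src = 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...'; // truncated sample payload
    document.body.appendChild(img);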

Get client to act as server with websocket?

I am basically writing an almost purely client-side application (there is a web server which can be used to store some persistent data, but it's easier to forget about it), but as part of this I was looking to add some functionality akin to hosting a game.
The scenario would be: one person hosts the game via their browser (opening a TCP socket awaiting connections), then X other people connect to that server and join. The server would be in charge of receiving and sending data between clients.
So in this scenario is it possible to host a websocket server within a webpage?
I was looking at doing something peer-to-peer style, but I don't think that is currently supported. It's not a major problem, though, as it's only going to be used for sending small amounts of text and some update messages between clients.
The WebSocket browser API is client only (for the foreseeable future).
In some sense, WebRTC is peer-to-peer, but even if the WebRTC API adds the ability to send arbitrary data, you still need a STUN/TURN server to establish the initial connection.
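To illustrate the first point: a web page can only ever be the connecting side, so the "host" tab is really just another WebSocket client talking to a relay running on an actual server. A rough sketch, where the relay URL and the message format are made up for the example:

    // Sketch: the "host" tab is still only a WebSocket client; a real server
    // (the URL below is hypothetical) must relay messages between the players.
    var ws = new WebSocket('ws://example.com/game-relay');

    ws.onopen = function () {
      ws.send(JSON.stringify({ type: 'host', game: 'my-game' })); // announce ourselves as host
    };

    ws.onmessage = function (event) {
      var msg = JSON.parse(event.data);            // a move or chat line from another player
      var update = { type: 'update', state: msg }; // the host would apply its game logic here
      ws.send(JSON.stringify(update));             // and push the result back via the relay
    };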

Delphi 2007 Web Service Server side application

My web service server-side application executes stored procedures in response to requests from different users. I am opening and closing the ADO connection for each request. Is this advisable, or can anyone suggest a better method? I would also appreciate help with session management.
Thanks in advance.
ADO supports connection pooling, so enable it via the ConnectionString property of your TADOConnection.
Creating a new connection from scratch can be very time-consuming (e.g. more than 1 second for a remote Oracle connection), so it is to be avoided like hell in a service application.
IMHO the best solution (from the performance POV) is to maintain one DB connection per server thread. So it will depend on your HTTP service implementation.
Connection pooling is also available if you don't want to deal with threads, as Mohammed wrote in his answer.
Consider also using server-side caching of answers. If you know that the result will be consistent, you are better off caching it on the server side and sharing it among clients. Of course, this is worth developing only if client requests can actually share results.
About session management, what do you want to know? I guess this is about client sessions. For a web service, the most common approach is to implement a session via cookies. See this SO answer about authentication of a web service for other possibilities. IMHO a RESTful (i.e. stateless) approach is worth considering for a web service.

JSON Asynchronous Application server?

First let me explain the data flow I need
Client connects and registers with server
Server sends initialization JSON to client
Client listens for JSON messages sent from the server
Now all of this is easy and straightforward to do manually, but I would like to leverage a server of some sort to handle all of the connection stuff, keep-alive, dead clients, etc. etc.
Is there some precedent for doing this kind of thing, where a client connects and receives JSON messages asynchronously from a server, without doing manual socket programming?
A possible solution is known as Comet, which involves the client opening a connection to the server that stays open for a long time. Then the server can push data to the client as soon as it's available, and the client gets it almost instantly. Eventually the Comet connection times out, and another is created.
Not sure what language you're using but I've seen several of these for Java and Scala. Search for comet framework and your language name in Google, and you should find something.
In the 'good old times' that would have been easy, since on the first connection the server gets the client's IP address, so it could call back. So easy, in fact, that this is how FTP does it, for no good reason... But now we can be almost certain that the client is behind some NAT, so you can't 'call back'.
Then you could just keep the TCP connection open, since it's bidirectional, and make the client wait for data to appear; the server would send whatever it wants, whenever it can... But now everybody wants every application to run on top of a web browser, and that means HTTP, which is strictly a 'request/response' protocol initiated by the client.
So the current answer is Comet. Simply put, a JavaScript client sends a request, but the server doesn't answer for a looooong time. If the connection times out, the client immediately reopens it, so there's always one open pipe waiting for the server's response. That response will contain whatever message the server wants to send to the client, and only when it's pertinent. The client receives it, and immediately sends a new query to keep the channel open.
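A bare-bones long-polling loop on the client side might look like the following; the /poll endpoint name is an assumption, and real Comet frameworks (CometD and friends) add reconnection back-off, message batching and so on:

    // Minimal long-poll sketch; the /poll endpoint name is hypothetical.
    function poll() {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/poll', true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState !== 4) return;              // wait for the response to complete
        if (xhr.status === 200 && xhr.responseText) {
          handleMessage(JSON.parse(xhr.responseText)); // the server finally pushed something
        }
        poll();                                        // reopen the pipe immediately, data or timeout
      };
      xhr.send();
    }

    function handleMessage(msg) {
      console.log('server pushed:', msg);
    }

    poll();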
The problem is that HTTP is a request/response protocol. The server cannot send any data unless a request is submitted by the client.
Trying to circumvent this by making a request and then continuously sending back responses on the same, original request is flawed: the behavior does not conform with HTTP, it does not play well with all sorts of intermediaries (proxies, routers, etc.) or with browser behavior (Ajax completion), and it doesn't scale well, since keeping a socket open on the server is very resource intensive and sockets are a precious resource (ordinarily only a few thousand are available).
Trying to circumvent this by reversing the flow (i.e. the server connects to the client when it has something to push) is even more flawed because of the security/authentication problems that come with it (the response can easily be hijacked, repudiated or spoofed) and also because the client is often unreachable (it lies behind proxies or NAT devices).
AFAIK most RIA clients just poll on a timer. Not ideal, but that is how HTTP works.
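Plain timer-based polling, by contrast, is just a periodic request; a rough sketch (the endpoint name is hypothetical):

    // Simple polling: ask the server for updates every 5 seconds.
    setInterval(function () {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/updates', true);               // hypothetical endpoint
      xhr.onload = function () {
        if (xhr.status === 200 && xhr.responseText) {
          console.log('update:', JSON.parse(xhr.responseText));
        }
      };
      xhr.send();
    }, 5000);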
GWT provides a framework for this kind of stuff & has integration with Comet (at least for Jetty). If you don't mind writing at least part of your JavaScript in Java, it might be the easier approach.