I have built a web app that sends and receives a lot of data over WebSockets, and each time I have to open and close a WebSocket connection.
Why not avoid the constant open/close? How about this: when the page loads, the WebSockets are created and opened and never closed, so I use the same WebSockets to send and receive text, arrays, links, search queries, etc. I am even thinking about transferring files like images and/or videos via WebSockets.
Can I do this, or do I have to close a WS connection after I am done? Will a never-closing WS raise a security issue? Also, I don't know whether the WS will actually close when the user leaves the page. If it does not, I guess that is another security issue right there.
How do I transfer files via WS? I cannot imagine how to do this.
Thanks in advance
WebSockets are meant to remain open for the lifetime of the webpage or SPA... it's totally expected, normal behavior.
The server might close the WebSocket at any time, and this is also normal behavior - just re-open the WebSocket.
Normally, servers will only close a WebSocket if it was idle for some time (e.g. Heroku sets the timeout limit to 50 seconds) or for traffic and concurrency considerations. Otherwise, the WebSocket connection can remain open indefinitely.
For example, the Plezi framework (Ruby) automatically sends a pong frame every once in a while, so the connection will remain open indefinitely unless the browser closes the socket (usually by exiting the page).
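To make this concrete, here is a minimal browser-side sketch (the endpoint URL and message shapes are invented for illustration): one socket is opened at page load, reused for everything, and reopened if it drops. Note that send() accepts Blobs and ArrayBuffers as well as strings, which is how you would push a file over the same socket; and the browser does close the socket when the user leaves the page, so a never-closed socket is not a leak on its own.

    let ws;

    function connect() {
      ws = new WebSocket("wss://example.com/socket"); // hypothetical endpoint
      ws.binaryType = "arraybuffer";                  // receive binary frames as ArrayBuffer

      ws.onmessage = (event) => {
        if (typeof event.data === "string") {
          console.log("text frame:", event.data);     // JSON, queries, links...
        } else {
          console.log("binary frame:", event.data.byteLength, "bytes"); // image/video data
        }
      };

      // If the server or network drops the socket, just reconnect.
      ws.onclose = () => setTimeout(connect, 1000);
    }
    connect();

    // Everything travels over the same socket:
    function sendQuery(q) {
      ws.send(JSON.stringify({ type: "search", q }));  // invented message shape
    }
    function sendFile(file) {                          // e.g. from <input type="file">
      ws.send(file);                                   // a File is a Blob; sent as one binary message
    }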
I have a Node.js application that uses MySQL as a database, with Express and Passport to manage user authentication. There can be 20-30 users connected to my Node.js application at one time.
Now, there are certain pages in my application where multiple users can work on the same data at once, so if one user changes the value of a field, the other users should see that change. Right now I achieve this with a setInterval function that runs every 5 seconds, making an AJAX POST request to the Node.js server and redrawing the user's field if necessary. This has worked fine so far, but now I have decided that I want other pages in my application to work this way, which means there will be multiple POSTs hitting my Node.js server every second just to run MySQL queries. I am fairly new to Node.js and I am not sure this is an optimal way to handle the situation.
I was wondering if there is a way to send new field data to the clients, without a client request, and redraw the DOM for them.
There are two solutions built into the browser that let the server send data directly to a connected client:
1. webSocket connections
2. Server-sent events
With each of these technologies, the client establishes one of these two types of connections on any given web page and then the server is able to send the client data whenever it wants.
webSockets are two-way communication channels. Server-sent events are one-way (data is sent from server to client). Server-sent events were designed to be a bit more efficient, but are more limited in what they can do.
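For a feel of the server-sent-events option, here is a minimal sketch (the endpoint name and payload are invented); the browser's built-in EventSource even reconnects automatically if the connection drops:

    // Client (browser): subscribe once; the server can then push at will.
    const source = new EventSource("/updates");       // hypothetical endpoint
    source.onmessage = (event) => {
      console.log("server pushed:", JSON.parse(event.data));
    };

    // Server (plain Node.js): keep the response open and write a
    // "data: ..." line whenever something changes.
    const http = require("http");
    http.createServer((req, res) => {
      if (req.url === "/updates") {
        res.writeHead(200, {
          "Content-Type": "text/event-stream",
          "Cache-Control": "no-cache",
          "Connection": "keep-alive",
        });
        // Demo push: the current time every 5 seconds.
        const timer = setInterval(() => {
          res.write(`data: ${JSON.stringify({ now: Date.now() })}\n\n`);
        }, 5000);
        req.on("close", () => clearInterval(timer));
      }
    }).listen(3000);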
It's important to realize that the lasting connection between client and server is only for the duration of that current page in the browser. If the end-user switches to another web page (even another page on your site), then the browser will close your current connection. If that new web page wants a similar connection, then it establishes a new connection on the new page.
With these types of connections from browser to server, your server then keeps track of each connection and some identifying information for each connection (like a username or userID). Then, when something changes in the data on the server, your server can figure out which clients should be notified of that change and send that new data over their connection. The client then receives that data and updates the visuals of the webpage using Javascript (displaying new data, updating status, etc...).
FYI, there is also a popular library called socket.io that works on top of webSocket and adds a number of useful features (such as connection-failure detection, auto-reconnect, a message-passing layer, etc.). You would use the socket.io library in both client and server to get these features.
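Applied to your field-update case, a rough socket.io sketch might look like this (assuming a socket.io v4-style API; the event names and ids are invented):

    // Server (Node.js): remember who each connection belongs to and
    // push field changes to everyone else - no polling involved.
    const { Server } = require("socket.io");
    const io = new Server(3000, { cors: { origin: "*" } });

    io.on("connection", (socket) => {
      socket.on("register", (userId) => {
        socket.data.userId = userId;      // identifying info per connection
      });
      socket.on("fieldChanged", (change) => {
        socket.broadcast.emit("fieldChanged", change);  // tell everyone else
      });
    });

    // Client (browser, with the socket.io client script loaded):
    const socket = io("http://localhost:3000");
    socket.emit("register", "user42");    // "user42": app-specific id
    socket.on("fieldChanged", ({ fieldId, value }) => {
      document.getElementById(fieldId).value = value;   // redraw just that field
    });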
I'm developing a web server and application on a microcontroller where resources (especially RAM) are very limited. When I point Chrome or Firefox to the web page hosted by my embedded web server, it attempts to establish a total of 6 concurrent TCP connections. First it opens one and loads the main HTML, then it attempts to open 5 more for loading various resources.
My server only has resources to handle 3 concurrent connections. Currently the device is programmed to refuse further connections by sending an RST packet in response to the SYN packets. So the first 3 SYN packets get a normal SYN-ACK reply and HTTP traffic starts, the latter 3 get an RST.
Both Chrome and Firefox seem to decide that the RST responses are fatal and abandon loading certain resources.
If the device does not send these RST responses (just forgets about the SYNs), Chrome loads the page fine. But I don't like the zombie connection attempts on the client.
Should browsers really be assuming the RST responses to connection attempts are fatal? I was under the impression that an HTTP server is allowed to close the connection at any time and the client should retry at least GET requests transparently.
What is the best solution, practically? Keep in mind that perhaps I would like to support multiple web clients with, for example, 4 connections in total; if the first client grabs all 4, there are none left for the second client.
Note that for my application there is zero benefit to having parallel connections. Why must I support so many connections just because the client thinks it will be faster? Even if I manage to support 6 now, what happens when the browser vendors decide to increase the default and break my application?
EDIT - I see the same issue with Firefox as well, not just Chrome.
Indeed, modern browsers will try to use 6 connections, in some cases even 8. You have two options:
1. Just ACK, but take your time replying
2. Use JavaScript to load your resources one by one (see the sketch below)
I am assuming here that you can't increase the concurrent capacity of the server (it being a small device) or radically change the appearance of the page.
Option #2 removes most of the resources from the page and instead has JS programmatically request each resource and add it to the page via the DOM. This might be a serious rework of the page.
I should also mention that you can inline images as data URLs (the image bitmap becomes just a base64 string in the page), which prevents the (mostly) parallel fetching of images that modern browsers do.
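A minimal sketch of option #2 (the file names are invented): the page's HTML contains no <img> tags, and a small script fetches each resource in turn, so only one extra connection is in flight at a time:

    const images = ["logo.png", "chart.png", "photo.jpg"];  // hypothetical resources

    async function loadSequentially() {
      for (const name of images) {
        const resp = await fetch(name);            // one request at a time
        const blob = await resp.blob();
        const img = document.createElement("img");
        img.src = URL.createObjectURL(blob);       // display without a second request
        document.body.appendChild(img);
      }
    }
    loadSequentially();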
"I was under the impression that an HTTP server is allowed to close the connection at any time and the client should retry at least GET requests transparently."
The server is allowed to close the connection after the first response has been sent, i.e. it may ignore the client's wish to keep the connection open. The server is not allowed to close the connection before or while the first request is being handled.
"What is the best solution, practically?"
Don't use too many resources that need to be retrieved in separate requests. Use data URLs and similar techniques, or increase your listen queue to accept more than 3 TCP connections at the same time.
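An inlined image looks like this; no extra request is needed because the bytes travel with the page (the base64 payload here is just a 1x1 transparent GIF, to show the shape):

    <img src="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7">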
I am making a webserver with Scapy, which is going pretty well. However, it's a pain in the butt for Scapy to maintain different connections at the same time, so I want the client to make a persistent connection to the webserver, which serves an HTML page with an image.
I have the client successfully initiating a TCP handshake and obtaining the HTML page; however, it opens a new connection to download the image, which I do not want.
I understand that in HTTP/1.1 it is not necessary to send the keep-alive header, as it's the default. How come Chrome and Firefox still open more connections to download separate files?
I am not sending a Connection: close header whatsoever, so I think it's weird that they do not use the same connection for all the files on the webpage.
EDIT: I tried using the actual Keep-Alive: timeout=n, max=n header. Still no result.
What could be the problem? Feel free to ask for details!
Persistent connections do not forbid the use of parallel connections; they only allow the same connection to be re-used for more requests. But within a persistent connection you can only make multiple requests one after the other. This means that to fetch lots of resources it is usually faster to open several connections in parallel and use each of them for multiple resources: e.g. using 4 parallel connections to get 12 images (3 images per connection) is faster than getting all 12 images one after the other over a single connection.
I have one app that consists of a "Manager" and a "Worker". Currently, the worker always initiates the connection, says something to the manager, and the manager sends the response.
Since there is a LOT of communication between the Manager and the Worker, I'm considering keeping a socket open between the two and doing the communication over it. I'm also hoping to initiate the interaction from both sides - enabling the manager to say something to the worker whenever it wants.
However, I'm a little confused as to how to deal with "collisions". Say the manager decides to say something to the worker, and at the same time the worker decides to say something to the manager. What will happen? How should such a situation be handled?
P.S. I plan to use Netty for the actual implementation.
"I'm also hoping to initiate the interaction from both sides - enabling the manager to say something to the worker whenever it wants."
Simple answer. Don't.
Learn from existing protocols: have a client and a server, and things will work out nicely. The Worker can be the server and the Manager can be the client. The Manager can make numerous requests; the Worker responds to the requests as they arrive.
Peer-to-peer can be complex, with no real value gained for that complexity.
I'd go for a persistent bi-directional channel between server and client.
If all you'll have is one server and one client, then there's no collision issue... If the server accepts a connection, it knows it's from the client, and vice versa. Both can read and write on the same socket.
Now, if you have multiple clients and your server needs to send a request specifically to client X, then you need handshaking!
When a client boots, it connects to the server. Once this connection is established, the client identifies itself as being client X (the handshake message). The server now knows it has a socket open to client X and every time it needs to send a message to client X, it reuses that socket.
Lucky you, I've just written a tutorial (sample project included) on this precise problem. Using Netty! :)
Here's the link: http://bruno.linker45.eu/2010/07/15/handshaking-tutorial-with-netty/
Notice that in this solution, the server does not attempt to connect to the client. It's always the client who connects to the server.
If you were thinking about opening a socket every time you wanted to send a message, you should reconsider: persistent connections avoid the overhead of connection establishment and can consequently speed up the data transfer rate considerably.
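The linked tutorial shows this in Netty; just to make the registration idea concrete, here is a rough sketch of the same pattern using Node's net module (the "HELLO <id>" wire format is invented for illustration):

    const net = require("net");
    const clients = new Map();                // clientId -> socket

    const server = net.createServer((socket) => {
      // Handshake: the first line a client sends identifies it.
      socket.once("data", (buf) => {
        const line = buf.toString().trim();   // expect e.g. "HELLO workerX"
        if (line.startsWith("HELLO ")) {
          const id = line.slice(6);
          clients.set(id, socket);            // the server now knows who this is
          socket.on("close", () => clients.delete(id));
        } else {
          socket.end();                       // no handshake, no service
        }
      });
    });
    server.listen(4000);

    // Later, whenever the server wants to talk to client X, it reuses that socket:
    function sendTo(id, message) {
      const socket = clients.get(id);
      if (socket) socket.write(message + "\n");
    }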
I think you need to read up on sockets....
You don't really get these kinds of problems. Other than deciding how to responsively handle both receiving and sending (generally this is done by threading your communications), you can take a number of approaches, depending on the app.
The correct link to the Handshake/Netty tutorial mentioned in brunodecarvalho's response is http://bruno.factor45.org/blag/2010/07/15/handshaking-tutorial-with-netty/
I would add this as a comment to his answer, but I don't have the minimum reputation required to do so.
If you feel like reinventing the wheel and don't want to use middleware...
Design your protocol so that the other peer's answers to your requests are always easily distinguishable from new requests from that peer. Then choose your network I/O strategy carefully. Whatever code reads from the socket must first determine whether the incoming data is a response to something you sent or a new request from the peer (by looking at the data's header, and at whether you have issued a request recently). You also need to maintain proper queueing so that the responses you send to the peer's requests are kept properly separate from the new requests you issue; see the sketch below.
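One simple framing that satisfies this (the JSON message shapes and the handleRequest helper are invented for illustration) is to tag every message with a kind and an id, so the reading code can route responses back to their callers and hand new requests to the application:

    let nextId = 0;
    const pending = new Map();                // id -> callback waiting for a response

    function sendRequest(socket, payload, onReply) {
      const id = nextId++;
      pending.set(id, onReply);
      socket.write(JSON.stringify({ kind: "request", id, payload }) + "\n");
    }

    // Called once per parsed incoming JSON line.
    function handleIncoming(socket, msg) {
      if (msg.kind === "response" && pending.has(msg.id)) {
        pending.get(msg.id)(msg.payload);     // an answer to something we sent
        pending.delete(msg.id);
      } else if (msg.kind === "request") {
        const result = handleRequest(msg.payload);  // app-specific handler
        socket.write(JSON.stringify({ kind: "response", id: msg.id, payload: result }) + "\n");
      }
    }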
First, let me explain the data flow I need:
Client connects and registers with server
Server sends initialization JSON to client
Client listens for JSON messages sent from the server
Now, all of this is easy and straightforward to do manually, but I would like to leverage a server of some sort to handle all of the connection stuff: keep-alives, dead clients, etc.
Is there some precedent for doing this kind of thing, where a client connects and receives JSON messages asynchronously from a server, without doing manual socket programming?
A possible solution is known as Comet: the client opens a connection to the server that stays open for a long time, and the server pushes data to the client as soon as it's available, so the client gets it almost instantly. Eventually the Comet connection times out, and another one is created.
Not sure what language you're using, but I've seen several of these for Java and Scala. Search for "comet framework" plus your language name in Google, and you should find something.
In the 'good old times' this would have been easy: on the first connection the server gets the client's IP address, so it could call back. So easy, in fact, that this is how FTP does it, for no particularly good reason... But now we can be almost certain that the client is behind some NAT, so you can't 'call back'.
You could instead just keep the TCP connection open, since it's bidirectional: make the client wait for data to appear, and the server sends whatever it wants whenever it can... But now everybody wants every application to run on top of a web browser, and that means HTTP, which is a strictly 'request/response' protocol initiated by the client.
So the current answer is Comet. Simply put, a JavaScript client sends a request, but the server doesn't answer for a looooong time. If the connection times out, the client immediately reopens it, so there's always one open pipe waiting for the server's response. That response contains whatever message the server wants to send to the client, and only when it's pertinent. The client receives it, and immediately sends a new query to keep the channel open.
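The client half of that loop can be sketched in a few lines (the /poll endpoint is invented): one request is always outstanding, and as soon as it returns or fails, the next one is opened:

    async function pollLoop() {
      while (true) {
        try {
          const resp = await fetch("/poll");  // the server holds this open
          if (resp.ok) {
            const msg = await resp.json();
            console.log("server said:", msg); // act on the pushed message
          }
        } catch (err) {
          await new Promise((r) => setTimeout(r, 1000)); // back off on errors
        }
      }
    }
    pollLoop();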
The problem is that HTTP is a request/response protocol: the server cannot send any data unless a request has been submitted by the client.
Trying to circumvent this by making a request and then continuously sending back responses on the same, original request is flawed: the behavior does not conform with HTTP, it does not play well with all sorts of intermediaries (proxies, routers, etc.) or with browser behavior (Ajax completion), and it doesn't scale well. Keeping a socket open on the server is very resource intensive, and sockets are a precious resource (ordinarily only a few thousand are available).
Trying to circumvent this by reversing the flow (i.e. the server connects to the client when it has something to push) is even more flawed, because of the security/authentication problems that come with it (the response can easily be hijacked, repudiated, or spoofed) and also because the client is often unreachable (it sits behind proxies or NAT devices).
AFAIK, most RIA clients just poll on a timer. Not ideal, but that's how HTTP works.
GWT provides a framework for this kind of stuff and has integration with Comet (at least for Jetty). If you don't mind writing at least part of your JavaScript in Java, it might be the easier approach.