Will the WebSocket API in HTML5 make Node.js irrelevant? If not, what are the differences between the two, and where should each be used?
Node.js is a server-side, event-based, asynchronous I/O framework.
HTML5 WebSockets are pretty much TCP sockets in the browser; that's all. They don't do much more than establish a two-way channel of communication.
For example, you would write your game server with Node.js and then use WebSockets to communicate between your browser-based client and the server.
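As a rough sketch of that split (this is not the project linked below; the 'ws' package, port, and echo behaviour are assumptions chosen just for the example):

    // Minimal Node.js WebSocket server sketch using the third-party 'ws' package (npm install ws).
    const WebSocket = require('ws');
    const wss = new WebSocket.Server({ port: 8080 });

    wss.on('connection', (socket) => {
      // Each browser that opens a WebSocket to ws://host:8080 lands here.
      socket.on('message', (data) => {
        // A game server would update game state; here we just echo.
        socket.send('server received: ' + data);
      });
    });

    // Browser side (client):
    //   const ws = new WebSocket('ws://localhost:8080');
    //   ws.onopen = () => ws.send('hello');
    //   ws.onmessage = (event) => console.log(event.data);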
An example of such a game (disclaimer: I'm the author of the project):
http://github.com/BonsaiDen/NodeGame-Shooter
To get an idea what Node.js does, I recommend that you watch some talks that are listed on our Node.js tag wiki.
Actually, WebSockets makes Node far more applicable; a good number of the interesting server side implementations of WebSockets use Node.
In fact, a library that is growing very quickly in popularity is Socket.IO. Socket.IO is a server-side (Node) and client-side library that lets you rapidly create interactive web applications. The client and server coordinate to pick the best communication mechanism available to both (WebSocket is preferred, with a fallback to long polling). The client-side and server-side interfaces are very similar (and both are JavaScript), so it's very easy to create web applications quickly.
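To give an idea of what that looks like, here is a minimal Socket.IO sketch (the port and event names are arbitrary choices, and the exact setup call varies a bit between Socket.IO versions):

    // Server (Node.js): npm install socket.io
    const io = require('socket.io')(3000);

    io.on('connection', (socket) => {
      socket.on('chat message', (msg) => {
        // Relay the message to every connected client.
        io.emit('chat message', msg);
      });
    });

    // Client (browser): the client script is served by the server at /socket.io/socket.io.js
    //   const socket = io('http://localhost:3000');
    //   socket.on('chat message', (msg) => console.log(msg));
    //   socket.emit('chat message', 'hello');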
Is there a way to stream video and audio on a website just to the clients, using a camera installed on the server - for instance, like YouTube does?
I've started reading about WebRTC, but it seems that with WebRTC I would have to set up a STUN/TURN server and other things, which for a one-way stream I think is not necessary (this is just my understanding), because I don't need anything from the clients - literally neither their video nor their audio.
So is there a way to achieve this using HTML5, streaming in just one direction:
server (camera) -> clients
Is there something out there for this, or should I stick with WebRTC?
I'm going to explain a possible solution for this scenario; there might be others, but I hope mine gives you a rough idea of how you could do it and a starting point to explore more of the amazing possibilities of WebRTC. Please let me know if something is not clear.
So, WebRTC is a free, open project that provides browsers and mobile applications with Real-Time Communications (RTC) capabilities via simple APIs. Sweet. On top of that, WebRTC has quite good browser support (not in every browser though; Safari just started supporting it a month ago, with Safari 11). But in this case we want to use WebRTC on the server side. At the end of the day we can still think of it as peer-to-peer real-time communication, where one of our peers is the server.
I don't know if you are familiar with Node.js, but I recommend writing your server app with it (<3 JavaScript!):
There are a few libraries that wrap WebRTC functionality to be used in the server side, like node-webrtc and node-rtc-peer-connection.
But I recommend you take a look at electron-webrtc, since the others might be using deprecated methods or be incomplete. electron-webrtc runs a headless Electron client in the background to use Chromium's built-in WebRTC implementation, so with it you should be able to access the camera on your server and create a stream to be served to the other peer (the browser).
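As a very rough sketch of the server-side peer (the creation pattern follows the electron-webrtc README as I recall it; treat the details as assumptions and check the project's docs):

    // Server-side peer connection via electron-webrtc (verify the API against current docs).
    const wrtc = require('electron-webrtc')();

    const pc = new wrtc.RTCPeerConnection();

    // Surface errors coming from the headless Electron process.
    wrtc.on('error', (err) => console.error(err));

    // ICE candidates must be forwarded to the browser peer over your signaling channel.
    pc.onicecandidate = (event) => {
      if (event.candidate) {
        sendToBrowser({ candidate: event.candidate }); // hypothetical signaling helper
      }
    };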
All above would be the WebRTC related tasks, in this case: streaming video peer(server)-to-peer(browser).
Now, let's talk about the signaling process, STUN and TURN.
Signaling: imagine a peer-to-peer scenario with two browsers that want to establish a direct connection and stream video and audio to each other. But they don't know each other; just as I can't send you a letter if I don't know your home address, they need a service that helps them find each other so each can learn the other's IP. This is done by what is called a "signaling server". If you somehow already know the other peer's IP, you don't need a signaling server.
STUN/TURN: the scheme above works perfectly in a local area network where each peer has its own IP address and there are no firewalls or routers between them. Otherwise you can have peers behind NATs or firewalls, and your signaling server alone won't be enough for the two peers to reach each other. If you have peers behind a NAT you'll need a STUN server, and if you have peers behind firewalls you'll need a TURN server. This is a bit simplified, but I just want you to have the general picture of when you might need STUN/TURN servers.
To better understand signaling, STUN and TURN, there is a very well-illustrated article that explains them perfectly.
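For orientation, STUN/TURN servers show up in your code only as configuration on the peer connection; a sketch (the server URLs and credentials here are placeholders):

    // Where STUN/TURN plug in: they are passed as iceServers when creating the peer connection.
    const pc = new RTCPeerConnection({
      iceServers: [
        { urls: 'stun:stun.example.org:3478' },                                       // NAT traversal
        { urls: 'turn:turn.example.org:3478', username: 'user', credential: 'pass' }  // relay of last resort
      ]
    });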
Now, for your scenario:
I think you probably don't need STUN/TURN servers, and you probably don't need to implement the signaling process either, because the browsers that are supposed to receive the stream from the server will already know that server's address, right? So they can establish a WebRTC connection with it.
EDIT: it is likely that you will need to implement some sort of handshake between the server and the clients (browsers), and this will be the signaling process. It is not part of WebRTC, which is why you need to implement it yourself. As I said, it is the way two peers discover each other, but they also exchange information about their local media conditions, such as codecs and the resolutions they can handle. In your case, the signaling server could be hosted on the same server you use to stream: you can build a small Node.js app that runs there and manages the whole signaling process easily; it is not a big deal. I recommend you read this article, and especially the section "How can I build a signaling service?". In general, all the WebRTC articles on that site are very helpful.
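A minimal signaling relay along those lines could look like the sketch below (using Socket.IO; the port and event name are arbitrary, and the payloads are whatever SDP/ICE objects the peers produce):

    // Tiny signaling relay: it only forwards offers/answers and ICE candidates between peers.
    // WebRTC itself never sees this code; it just needs the messages delivered somehow.
    const io = require('socket.io')(3001);

    io.on('connection', (socket) => {
      socket.on('signal', (payload) => {
        // payload is e.g. { sdp: ... } or { candidate: ... } from one peer.
        socket.broadcast.emit('signal', payload);
      });
    });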
Does this make sense to you? I think with this you can start digging a little deeper and see whether it is enough or you need to implement more. Hope it helps!
I've written a socket server using Node.js which works pretty well, but I'd rather use GAE for live production so that I get the warm fuzzy Google feeling.
On the client side, I'm using Flash->AIR (Desktop, iOS, and Android).
The question is, it seems that GAE does not exactly support inbound sockets. For Go, there's the Channel API and the XMPP API.
Channel API: Only works with Javascript clients, not Actionscript.
XMPP API: It seems that the most up-to-date ActionScript API out there is from XIFF... documentation is sparse, the website is convoluted, and all in all it does not inspire trust at the level I'm looking for. On top of this, I simply want a socket server to pass basic messages back and forth, with an understanding of "users" and "channels". XMPP seems to be quite a lot more than that, and therefore has more moving parts that can go wrong.
What is the standard answer to someone who's looking to build a custom socket server using a service like GAE, and not using Javascript clients? Just use XMPP and pray, or some other angle here I'm not seeing?
What is the difference between a simple Async servlet and the Comet / Bayeux protocol?
I am trying to implement a "Server Push" (or "Reverse Ajax") kind of webpage that will receive updates from the server as and when events occur on the server. So even without the client explicitly sending a request, I need the server to be able to send responses to the specific client browser.
I understand that Comet is the umbrella term for this kind of technology, with 'Bayeux' being the protocol. But when I looked through the servlet spec, even the 'async servlet' seems to accomplish the same thing. I mean, I can define a simple servlet with the <async-supported> attribute set to true in web.xml, and that servlet will be able to asynchronously send responses to the client. I can then have a jQuery or ExtJS based Ajax client that just keeps making a long_polling() call into the servlet, something like what is described in the link below:
http://www.ibm.com/developerworks/web/library/wa-reverseajax1/index.html#long
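For reference, the browser side of such a long-polling loop is usually just a request that is re-issued as soon as the previous one completes; a rough sketch (the /events URL, the JSON response shape, and the handleEvents function are assumptions for the example):

    // Rough browser-side long-polling loop; the async servlet holds /events open until
    // it has something to push, then the client immediately asks again.
    function longPolling() {
      fetch('/events')
        .then((response) => response.json())
        .then((events) => {
          handleEvents(events);   // hypothetical function that applies the server's updates
          longPolling();          // re-issue the request right away
        })
        .catch(() => setTimeout(longPolling, 5000)); // back off briefly on errors
    }

    longPolling();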
So my question is this:
What is the difference between a simple Async servlet and the Comet / Bayeux protocol?
Thanks
It is true that "Comet" is the term for these technologies, but the Bayeux protocol is used by only a few implementations. A Comet technique can use any protocol it wants; Bayeux is one of them.
Having said that, there are two main differences between an async servlet solution and a Comet+Bayeux solution.
The first difference is that the Comet+Bayeux solution is independent of the protocol that transports Bayeux.
In the CometD project, there are pluggable transports for both clients and servers that can carry Bayeux.
You can carry it using HTTP, with Bayeux being the content of a POST request, but you can also carry it using WebSocket, with Bayeux being the payload of the WebSocket message.
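To make that concrete, a Bayeux handshake message looks roughly like this, whether it travels as the body of an HTTP POST or as a WebSocket text frame (field names are from the Bayeux spec; the exact fields vary by implementation):

    // A Bayeux handshake as sent by a client (a JSON array of messages).
    [{
      "channel": "/meta/handshake",
      "version": "1.0",
      "supportedConnectionTypes": ["long-polling", "callback-polling", "websocket"]
    }]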
If you use async servlets, you cannot leverage WebSocket, which is way more efficient than HTTP.
The second difference is that async servlets only carry HTTP, and you need more than that to handle remote Comet clients.
For example, you may want to identify clients uniquely, so that two tabs on the same page result in two different clients. To do this, you need to add a "property" to the async servlet request; let's call it sessionId.
Next, you want to be able to authenticate a client; only authenticated clients can get a sessionId. But to differentiate between the first requests that authenticate and subsequent, already authenticated requests, you need another property, say messageType.
Next, you want to be notified quickly of disconnections due to network loss or other connectivity problems, so you need to come up with a heart-beat solution: if the heart beats you know the connection is alive; if it does not, you know it's dead and you perform rescue actions.
Next you need disconnect features. And so on.
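The ad-hoc message envelope you end up inventing for all of this tends to look something like the sketch below (every field name here is made up for illustration, not an existing protocol):

    // Hypothetical hand-rolled envelope layered on top of async servlets.
    const message = {
      sessionId: 'a1b2c3',        // identifies this particular tab/client
      messageType: 'handshake',   // or 'auth', 'data', 'heartbeat', 'disconnect', ...
      timestamp: Date.now(),      // used by the heart-beat / timeout logic
      payload: {}                 // the actual application data
    };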
Quickly you realize that you're building another protocol on top of HTTP.
At that point, it's better to reuse an existing protocol like Bayeux, and proven solutions like CometD (which is based on Comet techniques using the Bayeux protocol), which give you:
Java and JavaScript client libraries with simple yet powerful APIs
Java server library that lets you write your application logic via annotated services, without having to handle low-level details such as HTTP or WebSocket
Transport pluggability, both client and server
Bayeux protocol extensibility
Lazy messages
Clustering
Top performance
Future proof: users of CometD before the advent of WebSocket did not change a line of code to take advantage of WebSocket - all the magic was implemented in the libraries
Based on standards
Designed and maintained by web protocols experts
Extended documentation
I can continue, but you get the point :)
You don't want to use a low-level solution that ties you to HTTP only. You want a higher-level solution that abstracts your application from the Comet technique used and from the protocol that transports Bayeux, so that your application can be written once and leverage future technology improvements. As an example of such an improvement, CometD was working well long before async servlets came into the picture, and with async servlets it simply became more scalable - and so did your application, without the need to change a single line.
By using a higher-level solution you can concentrate on your application rather than on the gory details of how to write an async servlet correctly (and that's not as easy as one may think).
The answer to your question could be: you use Comet+Bayeux because you want to stand on the shoulders of giants.
Can HTML5 communicate with a Java ServerSocketChannel?
If possible, can anyone tell me the details?
Thank you in advance.
I'm assuming that you are talking about WebSockets and not some other protocol (Flash, Java applet and Silverlight native sockets, or XMLHttpRequest connections). WebSockets are an HTTP family spec from IETF and not related directly to HTML5 (though they are both in the extended family of next-gen web standards).
Browser WebSocket implementations can only talk to servers that deliberately support the WebSocket protocol. You can certainly write a server that supports the WebSocket protocol using a ServerSocketChannel, but WebSocket will not be able to connect to an arbitrary service that was written (using ServerSocketChannel or not) without the WebSocket protocol in mind.
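For completeness, the browser side is just the standard WebSocket API, and it will only succeed if the service on that port actually speaks the WebSocket protocol (the URL below is an example):

    // Browser side: this works only if ws://example.com:8081 implements the WebSocket
    // handshake and framing; a plain ServerSocketChannel service will simply fail here.
    const ws = new WebSocket('ws://example.com:8081/');
    ws.onopen = () => ws.send('hello');
    ws.onmessage = (event) => console.log('from server:', event.data);
    ws.onerror = () => console.log('handshake failed or connection refused');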
This is a deliberate security measure to prevent web browsers being forced to connect to non-web-related services (eg to port 25 to send spam).
If you want to write a WebSocket protocol layer on top of ServerSocketChannel you'll need to put a non-trivial amount of work into implementing the spec. It would seem more sensible to re-use an existing library.
What is a good framework to build a multiplayer game in Actionscript?
I want to create a multiplayer 2D shooter like Asteroids on the Blackberry Playbook; my main concern is latency - a shooter wouldn't be fun if the bullets are super-jerky and unexpectedly hit people.
I'm guessing that a UDP-based framework would be best. Can anyone point me in the right direction?
There are many things you can use off the shelf; the basic setup is very simple, but you have a few options.
The most common is server push; tools like Flash Media Server and LiveCycle Data Services from Adobe, or others like SmartFoxServer, can do this. With this setup the server keeps the connections of everyone connected to it and passes, or "pushes", the application state to the connected clients every time the data changes in the application.
Another option is called long polling, and this can be done with any web server. How this works is that the server stores the state of the application; when the client application starts it calls the server, and as soon as the server responds the client calls it again.
There are a few other ways to do it, but these are the most common. None of this has anything to do with protocols like HTTP, UDP, AMF, XMPP, or whatever else; the protocol is the format in which the data is sent. These off-the-shelf servers normally support a few of them. The fastest formats are binary ones like AMF, but fastest isn't always best: each has its advantages, because each gives you different features for keeping track of things.
If you are talking about a game that takes over the world and has millions of users, then you need to think about scaling, what happens when you need two or a hundred servers, and how they talk to each other. But for now keep in mind that the more the server does, the slower it will get; if you are sending small amounts of data it will be able to handle more users. Stick with making one efficient server and worry about scaling later if you get there.
You also need to think about what server-side programming language you want to mess with, if any. Some services don't let you touch anything; these normally cost money and don't do as much. Adobe likes Java, but there are servers that speak all of these protocols in just about every language. My favorite lately has been Node.js, a super fast way to run JavaScript on the server. Node.js has a built-in HTTP server, but it is just as easy to create a simple server that sends basic text through a Socket or XMLSocket, and a server like this will easily handle many thousands of users. There are many games that use Socket.IO, and if you want to see a simple example of what I'm talking about you can check out this.
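As a rough sketch of that kind of simple text server (the port is arbitrary; note that Flash's XMLSocket expects each message to be terminated by a null byte):

    // Minimal Node.js TCP text server that an ActionScript Socket/XMLSocket client could talk to.
    const net = require('net');

    const clients = [];

    const server = net.createServer((socket) => {
      clients.push(socket);
      socket.on('data', (data) => {
        // Relay whatever one client sends to every connected client,
        // terminating each message with '\0' as XMLSocket expects.
        for (const c of clients) {
          c.write(data.toString() + '\0');
        }
      });
      socket.on('close', () => clients.splice(clients.indexOf(socket), 1));
      socket.on('error', () => {});
    });

    server.listen(8124);

In practice a Flash client will also request a socket policy file before it connects, which a real server has to answer as well.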
Assuming you want to use Flash/Flex and not Java (Blackberry/Android) or native SDKs for Playbook -
There is a book as inspiration: http://www.packtpub.com/flash-10-multiplayer-game-essentials/book - it uses the Pulse SDK on the server side. But you could use your own sockets program on the server side; I use Perl as a TCP sockets server (it sends gzipped XML around) in a small card game, but that wouldn't work for your shooter.
Flash does not support UDP out of the box.
But there is the peer-to-peer networking protocol RTMFP in the upcoming Flash Media Server Enterprise 4 (the price is out of reach for mere mortals).
So your best bet is to buy an Amazon service for RTMFP; then you can pay per use and stay scalable...
You could do constant POST/GET requests to the server to get data for the game, but for a multiplayer shooter I'd suggest SmartFoxServer: http://www.smartfoxserver.com/
Out of the box, Adobe AIR supports UDP through datagram packets.
http://help.adobe.com/en_US/air/reference/html/flash/net/DatagramSocket.html
I couldn't find a particular networking API for Flash, but perhaps you can build one. Lidgren is open source and you can use that for reference.
You can also look into RTMFP, though its focus is on transmitting audio/video and some messages (it runs over UDP).