Video and audio stream - server to clients only - html

Is there a way to stream video and audio on a website to clients only, using a camera installed on the server, for instance like YouTube does?
I've started reading about WebRTC, but if I use WebRTC I would have to set up a STUN/TURN server and other things, which I think is unnecessary for a one-way stream (this is just my understanding of things), because I don't need anything from the clients, literally, neither their video nor their audio.
So is there a way to achieve this using html5, streaming just in one direction:
server (camera) -> clients
Is there something about this out there, or should I stick with WebRTC?

I'm going to explain a possible solution for this scenario; there might be others, but I hope mine gives you a rough idea of how you could do it and a starting point for exploring the amazing possibilities of WebRTC. Please let me know if anything is unclear.
So, WebRTC is a free, open project that provides browsers and mobile applications with Real-Time Communications (RTC) capabilities via simple APIs. Sweet. That said, WebRTC has quite good browser support (though not in every browser; Safari only started supporting it a month ago with Safari 11). But in this case we want to use WebRTC on the server side. At the end of the day we can still think of it as peer-to-peer real-time communication, where one of our peers is the server.
I don't know if you are familiar with Node.js, but I recommend writing your server app with it (<3 JavaScript!):
There are a few libraries that wrap WebRTC functionality for use on the server side, like node-webrtc and node-rtc-peer-connection.
But I recommend you take a look at electron-webrtc, since the others might be using deprecated methods or be incomplete. electron-webrtc runs a headless Electron client in the background to use Chromium's built-in WebRTC implementation. So with it you should be able to access the camera on your server and create a stream to be served to the other peer (the browser).
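To make this concrete, here is a rough sketch of what the server-side peer could look like with electron-webrtc. Treat it as an outline under assumptions rather than working code: the library's exact API may differ between versions, and sendToBrowser is a placeholder for whatever signaling channel you set up (more on signaling below).

    // npm install electron-webrtc
    var wrtc = require('electron-webrtc')() // spawns a hidden Electron process

    // One peer connection per connected browser
    var pc = new wrtc.RTCPeerConnection()

    // Hand our ICE candidates to the browser via your own signaling channel
    pc.onicecandidate = function (e) {
      if (e.candidate) sendToBrowser({ candidate: e.candidate })
    }

    // ...attach the camera's MediaStream to pc here, then make the offer:
    pc.createOffer()
      .then(function (offer) { return pc.setLocalDescription(offer) })
      .then(function () { sendToBrowser({ sdp: pc.localDescription }) })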
All of the above covers the WebRTC-related tasks, in this case: streaming video peer(server)-to-peer(browser).
Now, let's talk about the signaling process, STUN and TURN.
Signaling: imagine a peer-to-peer scenario with 2 browsers that want to establish a direct connection and stream video and audio to each other. But they don't know each other; it's like if I don't know your home address, I can't send you a letter. So they need a service that helps them discover each other, so that each can learn the other's IP. This is done by what is called "a signaling server". If you somehow already know the other peer's IP, you wouldn't need a signaling server.
STUN/TURN: the scheme above works perfectly in a local area network where each peer has its own IP address and there are no firewalls or routers between them. Otherwise, you can have peers behind a NAT or firewalls, and then your signaling server alone won't let the two peers reach each other. If you have peers behind a NAT, you'll need a STUN server, and if you have peers behind firewalls you'll need a TURN server. This is a bit simplified, but I just want you to have the general picture of when you might need STUN/TURN servers.
To better understand signaling, STUN and TURN, there is a very visual article that explains them perfectly.
Now, for your scenario:
I think you probably don't need STUN/TURN servers, and you probably don't need to implement the signaling process either, because the browsers that are supposed to receive the stream from the server will already know that server's address, right? So they can establish a WebRTC connection with it.
EDIT: it is likely that you will need to implement some sort of handshake between the server and the clients (browsers), and this is the signaling process. It is not part of WebRTC, and this is why you need to implement it yourself. As I said, it is how 2 peers discover each other, but they also exchange information about their local media conditions, like codecs, resolutions they can handle, etc. For your case, your signaling server could be hosted on the same server you use to stream: you can build a small Node.js app that runs there and manages the whole signaling process easily; it is not a big deal. I recommend you read this article, especially the section "How can I build a signaling service?". In general, all the WebRTC articles on that site are very helpful.
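To give you an idea of how small that Node.js signaling app can be, here is a hedged sketch using socket.io; the 'signal' event name and the blind-broadcast strategy are choices made up for the example, and any message format works as long as both ends agree on it.

    // npm install socket.io
    var io = require('socket.io')(8080)

    io.on('connection', function (socket) {
      // Relay SDP offers/answers and ICE candidates between peers.
      // With only the streaming server and one browser connected,
      // broadcasting to "everyone else" is all the routing we need.
      socket.on('signal', function (msg) {
        socket.broadcast.emit('signal', msg)
      })
    })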
Does this make sense to you? I think with it you can start digging a little deeper and see whether this is enough or you need to implement more. Hope it helps!

Related

Live video streaming webrtc?

So I want to be able to let one client distribute a live video feed to a lot of other clients who are only watching. Is it possible to use WebRTC for this? Or will I basically have to go through a service like ustream or whatever?
It is possible, with a caveat. One client can establish many outgoing P2P connections, one to every viewer, and stream video to them directly. However, for more than a very small handful of viewers this will quickly saturate the origin's bandwidth and possibly CPU. You would not be able to serve many viewers this way; however, this would work entirely without a middleman.*
* Save for the WebRTC connection negotiation server.
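To make the bandwidth issue concrete, here is a rough sketch of that fan-out approach: one RTCPeerConnection per viewer, each carrying its own copy of the stream. signalTo is an assumed helper for your negotiation server.

    var viewers = {}

    function addViewer (viewerId, localStream) {
      var pc = new RTCPeerConnection()
      // Each viewer gets a separate upstream copy of the tracks, so the
      // origin's upload bandwidth grows linearly with the audience size
      localStream.getTracks().forEach(function (track) {
        pc.addTrack(track, localStream)
      })
      pc.onicecandidate = function (e) {
        if (e.candidate) signalTo(viewerId, { candidate: e.candidate })
      }
      pc.createOffer()
        .then(function (offer) { return pc.setLocalDescription(offer) })
        .then(function () { signalTo(viewerId, { sdp: pc.localDescription }) })
      viewers[viewerId] = pc
    }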
To serve a larger number of viewers you'll have to use a centralised distribution server. The source would send exactly one video stream to that server, and the server would stream it to anyone who's interested. This still requires the server to have a beefy CPU and a lot of bandwidth, but it is more realistic to scale than the client side.
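Conceptually the distribution server does something like the following; this is only a sketch, assuming a server-side WebRTC implementation (node-webrtc, for instance) with sourcePc connected to the broadcaster and one viewerPc per watcher.

    // When the broadcaster's media arrives, forward the track into
    // every viewer's peer connection rather than consuming it ourselves
    sourcePc.ontrack = function (e) {
      viewerPcs.forEach(function (viewerPc) {
        viewerPc.addTrack(e.track, e.streams[0])
      })
    }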
You may need to spend a lot of money on such a server; have a look at c3.xlarge and better instances at AWS to get an idea. Using an established, cheap infrastructure like ustream may indeed be the more realistic option.

Audio and video conference with NodeJS

I would like to build a web application that lets two peers see and hear each other using video and audio streaming with HTML5 and no plugins (except for IE, for which I intend to use getUserMedia.js as a Flash fallback).
I also want to transmit that data through Node.js, but I have no idea where to start. For example:
Peer A <---> Node JS <---> Peer B
I'm interested in this Peer 2 Server 2 Peer approach instead of a Peer 2 Peer solution like PeerJS because:
1) I think it will be more compatible with all browsers. If this is not entirely true, please let me know.
2) PeerJS (which I'm not interested in) relies on black-magic STUN/TURN/ICE signaling in some cases. I read somewhere that only 70% of connections are suitable for this kind of transmission, and I cannot afford losing the other 30%. Again, let me know if this is not entirely true.
I have already played around with socket.io and know the concepts of getUserMedia() for accessing the user's webcam, but I don't know how to link that with socket.io and transmit it to the other client.
Browser compatibility has nothing to do with adding a server-side component. Whether you go p2p or p2s2p, if what you send is not recognized by the receiving browser, it won't work.
ICE is mandatory for WebRTC; you can't do without it, period. By default, you can only connect to computers on the same network (host candidates). If you provide a STUN server, you will be able to connect in roughly 70% of cases overall, much less in an enterprise context. http://webrtcstats.com/webrtc-revolution-in-progress/ has the latest stats from some vendors. You can see that for social sites, as of June 2014, 92% of calls can work through firewalls and NAT using simple STUN. The remaining calls needed to be relayed through a TURN server. There are a lot of free STUN server providers out there; this is the minimum you should use.
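Plugging STUN (and, for the remaining calls, TURN) into the browser is just configuration on the peer connection. The Google STUN address below is a well-known public server; the TURN entry is a placeholder you would replace with your own relay.

    var pc = new RTCPeerConnection({
      iceServers: [
        { urls: 'stun:stun.l.google.com:19302' } // free public STUN
        // Uncomment and fill in for the calls STUN alone can't traverse:
        // { urls: 'turn:turn.example.com:3478', username: 'u', credential: 'p' }
      ]
    })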
As for WebRTC on desktop IE and Safari:
While Flash fallbacks are interesting (read: easy), they pose two problems:
They do not generate a video stream compatible with peer connections or with HTML5. Not being compatible with peer connections means you cannot send the images or the video, only use them locally. Not being compatible with HTML5 means you cannot use the generated image and video in a <video> element, and you have no simple way to render it outside of a Flash plugin element. In the case of the shim you point to, they copy each single frame over from the Flash plugin to HTML, and they mention in the readme that this is too computationally expensive to be used for live video.
Flash uses different protocols (RTMP, RTMFP, ...) and codecs from WebRTC, and they are not mutually interoperable. You would need to maintain both separately or have a complicated dual-use infrastructure to deal with it. OpenClove, for example, is a vendor that offers such a dual-purpose infrastructure.
Another solution is to install a WebRTC plugin (not Flash) on desktop IE and Safari, one which implements "pure" WebRTC. In that case, you can directly interoperate with Chrome, Firefox, Opera, and any other browser that natively implements WebRTC 1.0.
We propose such a plugin, for free (no cost) and for all (not vendor specific) here
Whatever you do, you need WebRTC support in the browser ("no plugins"). So the "it will be more compatible with all browsers" point is moot: browser support is the limiting factor either way.

Listening on open port using just HTML5 technology - is it possible?

I'm creating a game with HTML5 and javascript, but am having trouble finding a way to get networking working.
What I want is for one instance of the game to listen on a WebSockets/HTTP stack, while the other instances connect to it.
So far, I'm yet to find any way of doing it that doesn't require additional plugins or online services (i.e. Flash or Silverlight opening the socket and pumping messages back, which isn't acceptable for mobile, or an online server like Player.IO, which, while much better than Flash, wouldn't work on Wi-Fi networks that are disconnected from the Internet).
While the latter option is a compromise I'm willing to make, I was wondering if it's one I need to make, or if I could survive without it.
Well, if I understand what you are trying to do (I hope I do):
Client One:
Plays game, listens to incoming data from Client Two
Client Two:
Plays game, connects to Client One
I'm guessing it's a P2P game? If this is the case, I think you want to look at WebRTC.
Otherwise, peer-to-peer is not really possible unless you run a mediator service that both clients connect to and that acts as a dispatcher.
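If you go the WebRTC route, the game messages themselves would travel over an RTCDataChannel; a minimal sketch, assuming the peer connection pc has already been negotiated through some mediator:

    // Client One opens a channel on its peer connection
    var channel = pc.createDataChannel('game')
    channel.onopen = function () {
      channel.send(JSON.stringify({ type: 'move', x: 3, y: 7 }))
    }

    // Client Two receives channels announced by the remote side
    pc.ondatachannel = function (e) {
      e.channel.onmessage = function (msg) {
        console.log('peer says:', JSON.parse(msg.data))
      }
    }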

Is it possible to develop an HTML5 social game (like poker) without including a server (just a client-side app) in which multiple users are connected?

Basically I want to make a social game like poker, in which multiple players get connected.
But I don't want server-side interaction.
So my idea was to make one person's browser act as the server while the others act as clients.
Person A's browser would hold the data (a client-side DB) and communicate with person B via WebSockets or something.
I am not sure if two browsers can somehow be connected with sockets, either WebSockets (HTML5) or any Flash plugin that can help with an IP-to-IP connection. Is it possible somehow?
As of now I am not even sure how users would connect to start the game. I may need a server for the initial connection.
Currently you cannot do this with web browsers because they cannot act as a server. It will probably be possible in the future, though: Chrome is experimenting with a Socket implementation (an experimental feature in Canary releases that is disabled by default). Node.js has already (partially) been ported using chrome.socket.
With Flash, it has been possible to create p2p multiplayer games since version 10, but you still need a central server to set up the initial connections between players. There's a library for it here: http://www.flashrealtime.com/p2p-game-lib/.
Be aware that p2p multiplayer games make it easier for players to cheat, because the authority lies with one or more players rather than with a central server.
It's not possible for a browser to accept a WebSocket request, so a server is necessary to handle such requests. For setting up a WebSocket server, I recommend socket.io, which is based on Node.js. It's very easy to use, and it's just JavaScript, a language web developers are used to.
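For a feel of what that looks like, here is a hedged sketch of a socket.io dispatcher; the 'join' and 'action' events and the table-room idea are made up for the example.

    // npm install socket.io
    var io = require('socket.io')(3000)

    io.on('connection', function (socket) {
      socket.on('join', function (table) {
        socket.join(table) // group players by poker table
        socket.to(table).emit('player-joined', socket.id)
      })
      // The server relays every action, so no single player holds authority
      socket.on('action', function (table, data) {
        socket.to(table).emit('action', data)
      })
    })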

When using WebRTC, is a peer-to-peer architecture redundant to build a video chat service like Skype?

We're playing around with WebRTC and trying to understand its benefits.
One reason Skype can serve hundreds of millions of people is because of its decentralized, peer-to-peer architecture, which keeps server costs down.
Does WebRTC allow people to build a video chat application similar to Skype in that the architecture can be decentralized (i.e., video streams are not routed from a broadcaster through a central server to listeners but rather routed directly from broadcaster to listener)?
Or, put another way, does WebRTC allow someone to essentially replicate the benefits of a P2P architecture similar to Skype's?
Or do you still need something similar to Skype's P2P architecture?
Yes, that's basically what WebRTC does. Calls using the getPeerConnection() API don't send voice/video data through a centralized server, but rather use firewall traversal protocols like ICE, STUN and TURN to allow a direct, peer-to-peer connection. However, the initial call setup still requires a server (most likely something running a WebSocket implementation, but it could be anything that you can figure out how to get JavaScript to talk to), so that the two clients can figure out that they're both online, signal that they want to connect, and then figure out how to do it (this is where the ICE/STUN/TURN bit comes in).
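On the caller's side, that setup dance looks roughly like this (written against the modern standard API names; signal stands in for whatever server-mediated channel you use):

    var pc = new RTCPeerConnection({
      iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
    })

    navigator.mediaDevices.getUserMedia({ video: true, audio: true })
      .then(function (stream) {
        stream.getTracks().forEach(function (t) { pc.addTrack(t, stream) })
        return pc.createOffer()
      })
      .then(function (offer) { return pc.setLocalDescription(offer) })
      .then(function () { signal({ sdp: pc.localDescription }) })

    // Candidate exchange also goes through the server...
    pc.onicecandidate = function (e) {
      if (e.candidate) signal({ candidate: e.candidate })
    }
    // ...but once ICE succeeds, media flows directly between the peers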
However, there's more to Skype's P2P architecture than just passing voice/video data back and forth. The majority of Skype's IP isn't in the codecs or protocols (much of which they licensed from Global IP Solutions, which Google purchased two years ago and then open-sourced, and which forms the basis of Chrome's WebRTC implementation). Skype's real IP is all in the piece of WebRTC that still depends on a server: figuring out which people are online, where they are, and how to get hold of them, and doing that in a massively decentralized fashion. (See here for some rough details.) I think you could probably use the DataStream portion of the getPeerConnection() API to do that sort of thing, if you were really, really smart, but it would be complicated and would most likely stomp on a few Skype patents. Unless you want to be really, really huge, you'd probably just want to run your own centralized presence and location servers and handle all that stuff through standard WebSockets.
I should note that Skype's network architecture has changed since it was created; it no longer (from what I hear) uses random users as supernodes to relay data from client 1 to client 2; it didn't scale well and caused rampant variability in results (and annoyed people who had non-firewalled connections and bandwidth).
You can definitely build something Skype-like with WebRTC, and more. :-)