My requirements are similar to this old question from 2009. I am just re-posting since the original post is about two years old and the question is now closed.
How can I transmit a live video stream over a socket using Flex / ActionScript 3.0?
I am developing an application that works on a P2P architecture, so I can't use FMS for live media streaming. I have read about the NetConnection and NetStream classes, but I can't get started with them since all the examples use FMS. How do I do this?
Secondly, I also need a suitable library / tool / technique to encode (and then decode) video frames before displaying and transmitting them. For this I have read about the x264 codec, but using it with Flex seems too complicated. Are there any other alternatives?
Any tutorial or blog post would be of great help...
You can send data directly to a remote machine; however, that machine would need to be listening, and unless you are using AIR, it would also need a socket policy file. Not being able to connect multiple machines directly to each other without a policy file forces you to have a central server, and prevents straightforward implementations of in-browser P2P chat/video/(whatever) applications.
So you have to have a central server; however, you don't have to pay for one.
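For reference, the socket policy file mentioned above is just a small XML document that the listening machine serves (Flash requests it on port 843 before opening a raw socket). A minimal, fully permissive example looks like this; tighten the domains and ports for anything real:

    <?xml version="1.0"?>
    <!-- Minimal Flash socket policy file: allows any domain to connect to any port -->
    <cross-domain-policy>
        <allow-access-from domain="*" to-ports="*" />
    </cross-domain-policy>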
I knew I had read about this somewhere, so I searched Google and came up with the links below.
http://haxe.org/doc/flash/peer2peer
https://github.com/OpenRTMFP/Cumulus
All you need is a developer key, which Adobe gives away for free at:
https://www.adobe.com/cfusion/entitlement/index.cfm?e=stratus
That said, you will need to know some C++ in order to take proper advantage of this.
If you want to learn how to do something basic to get you started, and you are really just interested in developing something for your local network, then these articles explain how to use RTMFP as a multicast group:
http://www.flashrealtime.com/videotutorial-remote-device-controller/
http://www.flashrealtime.com/local-flash-peer-to-peer-communication-over-lan-without-cirrus/
[EDIT: since the site has removed those pages, the content of the last two links can currently be found via the Wayback Machine, in snapshots from around early 2011.]
There is a ton of stuff that you can do with the information above; yet, I would start small.
You will need a media server anyway, either FMS, SmartFox, or something else, and have both parties connect to it. It will also handle the encoding. I don't think you can do this without a media server in between...
Related
Is there a way to stream video and audio on a website just to the clients, using a camera installed on the server, for instance like YouTube does?
I've started reading about WebRTC, but if I use WebRTC it seems I need to create a STUN/TURN server and other things, which I think is unnecessary for a one-way stream (this is just my understanding), because I don't need anything from the clients, literally neither their video nor their audio.
So is there a way to achieve this using HTML5, streaming in just one direction:
server (camera) -> clients
Is there something out there for this, or should I stick with WebRTC?
I'm going to explain one possible solution for this scenario; there might be others, but I hope mine gives you a rough idea of how you could do it and a starting point for exploring the possibilities of WebRTC. Please let me know if anything is unclear.
So, WebRTC is a free, open project that provides browsers and mobile applications with Real-Time Communications (RTC) capabilities via simple APIs. Also, WebRTC has quite good browser support (though not in every browser; Safari only started supporting it a month ago, with Safari 11). But in this case we want to use WebRTC on the server side. At the end of the day we can still think of it as peer-to-peer real-time communication, where one of our peers is the server.
I don't know if you are familiar with Node.js, but I recommend writing your server app with it (<3 JavaScript!):
There are a few libraries that wrap WebRTC functionality for use on the server side, like node-webrtc and node-rtc-peer-connection.
But I recommend you take a look at electron-webrtc, since the others might be using deprecated methods or be incomplete. electron-webrtc runs a headless Electron client in the background to use Chromium's built-in WebRTC implementation. With it you should be able to access the camera on your server and create a stream to be served to the other peer (the browser).
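As a very rough sketch of what the server-side peer could look like (assuming the wrapper you pick really does support media streams and mirrors the browser's RTCPeerConnection API, which you should verify in its docs; sendToBrowser is a hypothetical signaling helper):

    // Rough sketch only: electron-webrtc is documented as exposing the standard
    // RTCPeerConnection API; how you obtain a camera stream depends on your setup.
    const wrtc = require('electron-webrtc')();
    const pc = new wrtc.RTCPeerConnection();

    // Forward our ICE candidates to the browser over your signaling channel.
    pc.onicecandidate = (event) => {
      if (event.candidate) {
        sendToBrowser({ type: 'candidate', candidate: event.candidate }); // hypothetical helper
      }
    };

    // Publish the camera stream and send an SDP offer to the browser.
    function publish(cameraStream) {
      pc.addStream(cameraStream); // newer implementations use addTrack() instead
      pc.createOffer()
        .then((offer) => pc.setLocalDescription(offer))
        .then(() => sendToBrowser({ type: 'offer', sdp: pc.localDescription }));
    }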
All of the above covers the WebRTC-related tasks, in this case streaming video peer (server) to peer (browser).
Now, let's talk about the signaling process, STUN, and TURN.
Signaling: imagine a peer-to-peer scenario with two browsers that want to establish a direct connection and stream video and audio to each other. But they don't know each other; if I don't know your home address, I can't send you a letter. So they need a service that helps them find each other, so each can learn the other's IP. This is done by what is called "a signaling server". If you somehow already know the other peer's IP, you wouldn't need a signaling server.
STUN/TURN: the scheme above works perfectly on a local area network where each peer has its own IP address and there are no firewalls or routers between them. Otherwise, you can have peers behind NAT or firewalls, and then your signaling server alone won't be enough for the two peers to reach each other. If you have peers behind NAT, you'll need a STUN server, and if you have peers behind restrictive firewalls you'll need a TURN server. This is a bit simplified, but I just want you to have the general picture of when you might need STUN/TURN servers.
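Just to make the STUN/TURN part concrete: on the client they are simply entries in the RTCPeerConnection configuration (the URLs and credentials below are placeholders, not real servers):

    // Browser-side peer connection with placeholder ICE servers.
    const pc = new RTCPeerConnection({
      iceServers: [
        { urls: 'stun:stun.example.org:3478' },    // STUN: discover our public address behind NAT
        {                                          // TURN: relay media when no direct path exists
          urls: 'turn:turn.example.org:3478',
          username: 'user',
          credential: 'secret'
        }
      ]
    });

    // For a one-way stream, the receiving browser just attaches the incoming track to a <video> tag.
    pc.ontrack = (event) => {
      document.querySelector('video').srcObject = event.streams[0];
    };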
To better understand Signaling, STUN and TURN, there is a very graphic article that explains them perfectly.
Now, for your scenario:
I think you probably don't need STUN/TURN servers, and you probably don't need to implement the signaling process either, because the browsers that are supposed to receive the stream from the server already know the server's address, right? So they can establish a WebRTC connection with it.
EDIT: it is likely that you will need to implement some sort of handshake between the server and the clients (browsers); this is the signaling process. It is not part of WebRTC itself, which is why you need to implement it yourself. As I said, it is the way two peers discover each other, but they also exchange information about their local media conditions, like the codecs and resolutions they can handle. For your case, the signaling service could be hosted on the same server you use to stream: you can build a small Node.js app that runs there and manages the whole signaling process; it is not a big deal. I recommend you read this article, especially the section "How can I build a signaling service?". In general, all the WebRTC articles on that site are very helpful.
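As a minimal sketch of such a signaling service (any WebSocket library would do; socket.io is used here just as an example), the Node.js app only has to relay offers, answers and ICE candidates between the two sides:

    // Minimal signaling relay: it never inspects the messages, it only forwards
    // SDP offers/answers and ICE candidates between the connected peers.
    const io = require('socket.io')(3000);

    io.on('connection', (socket) => {
      socket.on('signal', (message) => {
        // Re-broadcast to every other connected peer (fine for one server plus a few viewers).
        socket.broadcast.emit('signal', message);
      });
    });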
Does this make sense to you? I think with this you can start digging a little deeper and see whether it is enough or you need to implement more. Hope it helps!
It's been a long time since I've needed to build a streaming media solution for the web, because there are so many good services covering basic needs. Ages ago I did streaming deployments with Red5 and a frontend player like Flow or JW, but despite a lot of searching I haven't been able to determine the best modern options for a simple web solution with good native compatibility on mobile and desktop. I'm hoping someone can suggest a good open source stack that would be a good fit for what I'm trying to do:
Mobile + Desktop Web Audio Support (with the possibility of some video at some point)
Streaming (not live on-air streaming, but the ability to seek and listen without downloading the whole file, buffering only enough for good quality so as not to waste bandwidth)
Track how much of a stream has been listened to (not the amount downloaded)
Protecting streams (this should be pretty easy either with headers or with URL patterns that expire the link/connection after a certain amount of time, by writing that logic on the server side, but I wanted to mention it anyway)
Lightweight, maybe 10 to 20 concurrent audio streams for now.
My initial thought, since my knowledge of the streaming space is dated, was to do it with RTMP, RTSP, or HLS and something like JWPlayer for the frontend, depending on which protocol seemed to have the best support across the most devices. Or, if client-side support is good, I could probably skip the media server and go with pseudo-streaming via the webserver. On the server side I could handle access control to the streams, and on the frontend player I could use the API to keep track of the number of seconds played and just ping that data back in an AJAX request to track how much has actually been listened to (as opposed to downloaded/buffered), as in the sketch below.
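To illustrate the "seconds actually listened to" idea (the endpoint and attribute names below are made up), the frontend could accumulate playback time from the player's time events and report it periodically:

    // Count seconds actually played (not merely buffered) and report them to the server.
    const audio = document.querySelector('audio');
    let playedSeconds = 0;
    let lastTime = null;

    audio.addEventListener('timeupdate', () => {
      // Only count small forward steps so seeks and rewinds are not counted as listening.
      if (lastTime !== null && audio.currentTime > lastTime && audio.currentTime - lastTime < 2) {
        playedSeconds += audio.currentTime - lastTime;
      }
      lastTime = audio.currentTime;
    });

    // Ping the accumulated total back every 10 seconds.
    setInterval(() => {
      fetch('/api/listen-progress', {                    // hypothetical endpoint
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ track: audio.dataset.trackId, playedSeconds })
      });
    }, 10000);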
The thing is, I have a feeling that is probably an obsolete way to go about it in 2017. Even though I haven't turned up anything specific yet, I suspect there is a better solution out there using Node and perhaps WebSockets to accomplish the same thing with the server and player coupled more tightly (the server aware of the playhead/actual amount played, rather than the frontend only), or a solution that leverages newer standards (MPEG-DASH?). Plus, in the past, getting the older streaming protocols to work across all devices was a pain, and IIRC there is no universal HTML5 support for any one of them across all browsers.
So what are the best current approaches to tackle something like this? Is a media server + a front end like JW or Flow still one of the better ways to go or are there better/easier ways to deploy a solution like this?
I would like to create a video chat web app, similar to "Chat Roulette",
I'd like to use Flash for both receiving and streaming video & audio.
The main problem is that it seems like you are always redirected to purchasing a Flash Media Server license, instead of being allowed to host a server yourself.
Is there any good known alternative to that, that I can host by myself and that works well with the flash streaming APIs?
Check out Red5:
https://github.com/Red5/red5-server
It's an open source Java-based media server.
There are many other alternatives out there, but since I've only used FMS and Red5, I can't vouch for other solutions.
Some that might also be worth considering:
Mammoth: http://sourceforge.net/projects/mammoth/
MistServer: http://mistserver.org/products
Please keep support and community activity in mind when choosing a platform; it can be useful to pick something with an active community.
I've spent plenty of time on this problem, but it looks like I need some help. I have a web conference application which provides the ability to stream live video, chat, share documents, draw on a whiteboard, share the desktop, etc. Now I want to record everything that happens in each individual webinar, including video and sound. So I'm looking for tools that can help with this goal.
Here's input data:
This is Adobe Flash based application
Using wowza server
Everything should be recorded on the server
Many webinars can be in recording mode at the same time
The recording should be produced as a video file (FLV, MP4, or whatever)
What I've done so far and what problems I have:
I have implemented recording on the server side. But this is not a video; it is just a list of commands to recreate a past webinar. It works, but has lots of limitations and problems with rewinding.
Now I'm testing this FLV Encoding library. I created an AIR application that starts on the server when recording is needed, connects to the webinar in question, and takes screenshots of itself with the BitmapData.draw() method. It works pretty neatly, but has some limitations that I'm looking for help with:
First of all, there is the sound problem. I have no idea how to capture all sounds from all sources in Flash. So far, from my tests and googling, I conclude that SoundMixer.computeSpectrum() won't help me do this. Maybe this can be done on the server side by mixing all the streams at the right times, but I think that could lead to synchronization problems, so I would prefer to capture the sound on the client. Maybe there is a way to capture the audio byte array from an RTMP stream somehow?
Security problems. We have two kinds. The first is with streaming videos: the BitmapData.draw() method throws exceptions even after adding <StreamAudioSampleAccess>true</StreamAudioSampleAccess> and <StreamVideoSampleAccess>true</StreamVideoSampleAccess> on the server. There are lots of posts about this problem and no good solution.
But the more complex problem is that YouTube videos can be opened in a webinar using the API player, and in that situation I have no idea how to resolve the security problem. Maybe someone knows a way or workaround to use BitmapData.draw() on the YouTube AS3 player?
Or maybe there is another good way to solve my recording issue?
The free Apache OpenMeetings conferencing software [1] includes a Java recording application, which should work in the 3.0 release. Just use it.
[1] http://openmeetings.apache.org/
I'm planning to carry out the following project and would be thankful if somebody could verify my approach!
I want to establish fully bidirectional wireless realtime communication between a smartphone (cross-platform) and an embedded microcontroller running a webserver.
The webserver should provide data from the connected hardware in realtime, e.g. temperature.
The smartphone should render these on screen, and you should be able to configure the hardware (e.g. LED color) with the smartphone and save the configuration to the embedded webserver.
My first guess was to use HTML5 WebSockets, but they aren't available on all platforms, so I took inspiration from XBMC, which uses JSON-RPC.
Just imagine a car stereo system with Bluetooth connected to a µC with a webserver and a WiFi dongle.
My plan is to implement a web app on the webserver that serves the purpose mentioned above. But the tricky part is getting the user to establish a Bluetooth connection to the stereo system, because I looked up similar questions, which say you can't access things like Bluetooth on the smartphone with HTML5.
Long story short, this is the current idea:
hardware -> µC -> webserver -> HTML5 webapp -> WiFi -> smartphone
Communication via JSON-RPC, as sketched below.
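To make the JSON-RPC idea concrete, the web app on the phone would just POST small JSON-RPC 2.0 requests to the µC's webserver; the endpoint and method names below are made-up examples:

    // Tiny JSON-RPC 2.0 client for the web app (endpoint/method names are hypothetical).
    function rpc(method, params) {
      return fetch('/jsonrpc', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ jsonrpc: '2.0', id: Date.now(), method: method, params: params })
      }).then((response) => response.json())
        .then((body) => body.result);
    }

    // Read a sensor value and store a configuration change.
    rpc('sensor.getTemperature', {}).then((t) => console.log('Temperature:', t));
    rpc('config.setLedColor', { color: '#ff0000' });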
I would be highly thankful if someone could comment on this idea and the planned implementation, because I have never done this before!
Thanks guys!
At muzzley, we have developed a framework to simplify this process. We provide a way for your browser applications to communicate with smartphones. On the smartphone side you have ready-made widgets (gamepad, drawpad, switch, swipe, and others), or you can build your own HTML-based widget.
(Disclaimer: I work on this project.)
Most of the work is already done for what you want to do :)
Quick start here:
http://www.muzzley.com/documentation/quick-start.html
You can pull from github several examples here:
https://github.com/muzzley/muzzley-demos/
Lib for browser:
http://www.muzzley.com/documentation/libraries/javascript.html
I hope it helps.
Best
I think your first instinct was probably right. Have you looked at socket.io for Node? It's essentially a shim which ensures that you can use WebSocket-style functionality in virtually any combination of device and browser (see the list of supported transport mechanisms and browsers here).
It should allow you to avoid Bluetooth altogether.
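For example (the event names and hardware helpers below are hypothetical), the Node server on the µC side could push readings and accept configuration changes like this, and socket.io picks a fallback transport when real WebSockets are unavailable:

    // Server (Node.js behind the µC's webserver): push temperature readings, accept LED config.
    const io = require('socket.io')(8080);

    io.on('connection', (socket) => {
      const timer = setInterval(() => {
        socket.emit('temperature', readTemperature());         // hypothetical hardware read
      }, 1000);

      socket.on('setLedColor', (color) => setLedColor(color));  // hypothetical hardware write
      socket.on('disconnect', () => clearInterval(timer));
    });

    // Browser side (served by the webserver):
    //   const socket = io();
    //   socket.on('temperature', (t) => { document.querySelector('#temp').textContent = t; });
    //   socket.emit('setLedColor', '#00ff00');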