I've spent plenty of time on this problem, but it looks like I need some help. I have a web conference application that provides the ability to stream live video, chat, share documents, draw on a whiteboard, share the desktop, etc. Now I want to record everything that happens in each individual webinar, including video and sound, so I'm looking for tools that can help with this goal.
Here's the input data:
It is an Adobe Flash based application
It uses a Wowza server
Everything should be recorded on the server
Many webinars can be in recording mode at the same time
The recording should be produced as a video file (FLV, MP4 or whatever)
What I've done so far, and what problems I have:
I have implemented recording on the server side, but the result is not a video; it's just a list of commands used to recreate the webinar afterwards. It works, but it has lots of limitations and problems with rewinding.
Now I'm testing the FLV Encoding library. I created an AIR application that starts on the server when recording is needed, connects to the webinar in question and takes screenshots of itself with the BitmapData.draw() method (a rough sketch of this capture loop is at the end of this question). It works pretty neatly, but it has some limitations that I'm looking for help with:
First of all, there is the sound problem. I have no idea how to capture all sounds from all sources in Flash. From my tests and googling so far, I conclude that SoundMixer.computeSpectrum() won't help me do this. Maybe it could be done on the server side by mixing all the streams at the right times, but I think that would lead to synchronization problems, and I would prefer to capture the sound on the client. Maybe there is a way to capture the audio byte array from an RTMP stream somehow?
Security problems. We have two kinds of them. The first is with streamed videos: the BitmapData.draw() method throws exceptions even after adding <StreamAudioSampleAccess>true</StreamAudioSampleAccess> and <StreamVideoSampleAccess>true</StreamVideoSampleAccess> on the server. There are lots of posts about this problem and no good solution.
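For reference, on the Wowza side those flags normally live in the application's Application.xml under Client/Access, and as far as I remember Wowza expects a stream-name pattern such as * rather than a boolean, so something along these lines is worth double-checking against your Wowza version:

    <!-- conf/[your-application]/Application.xml (structure from memory; verify against your Wowza version) -->
    <Client>
        <Access>
            <StreamReadAccess>*</StreamReadAccess>
            <StreamWriteAccess>*</StreamWriteAccess>
            <!-- allow BitmapData.draw() / sample access on published streams -->
            <StreamAudioSampleAccess>*</StreamAudioSampleAccess>
            <StreamVideoSampleAccess>*</StreamVideoSampleAccess>
        </Access>
    </Client>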
The more complex problem is that YouTube videos can be opened in a webinar using the YouTube API player, and in that situation I have no idea how to resolve the security restrictions. Does anyone know a way or a workaround to use BitmapData.draw() on the YouTube AS3 player?
Or maybe there is another good way to solve my recording issue?
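Here is roughly what the capture loop in the recording AIR application looks like. This is only a sketch of the approach described above; the FlvEncoder class and its writeFrame() method are stand-ins for whatever FLV encoding library is used, and the frame rate and dimensions are arbitrary:

    import flash.display.BitmapData;
    import flash.events.TimerEvent;
    import flash.utils.Timer;

    // Assumptions: FlvEncoder is a hypothetical wrapper around the FLV encoding
    // library, and "stage" is the AIR window showing the webinar being recorded.
    var fps:int = 10;
    var encoder:FlvEncoder = new FlvEncoder(stage.stageWidth, stage.stageHeight, fps); // hypothetical class
    var frame:BitmapData = new BitmapData(stage.stageWidth, stage.stageHeight);
    var ticker:Timer = new Timer(1000 / fps);

    ticker.addEventListener(TimerEvent.TIMER, function(e:TimerEvent):void {
        // Snapshot the current state of the webinar UI.
        // This is the call that hits the security sandbox for RTMP video
        // and for the YouTube player described above.
        frame.draw(stage);
        encoder.writeFrame(frame); // hypothetical encoder call
    });

    ticker.start();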
The free Apache OpenMeetings conferencing software [1] includes a Java recording application, which should work as of the 3.0 release. Just use it.
[1] http://openmeetings.apache.org/
I want to publish an mp3 file from one of the peers and play it on the other peers, very similar to an RTMFP chat app.
From what I understand so far:
netstream.publish() is used to publish a stream to an RTMFP NetConnection, and netstream.play() is used to play a stream from other peers.
The steps for streaming mic and camera capture are:
netstream.attachCamera(cam);
netstream.attachAudio(mic);
netstream.publish('video');
However, I don't see a way to publish (stream) mp3 files using NetStream. Note that using NetStream is essential, since I want to "publish" the audio to listening peers.
Please correct me if I was wrong above. Ideally, what I want to achieve should be easily possible, yet I can't find any pointers for it. Is it possible to use a ByteArray for this? Any alternative streaming strategy would be welcome as long as it works with RTMFP. Links to code examples would be great too.
You have stumbled upon one of the strange quirks of NetStream: it can publish sound from the microphone, but not from an arbitrary sound source. There are some workarounds, some more complex than others.
Streaming through a virtual microphone. This is the easiest workaround, and the best one (IMO) if your project allows you to use it. You just have to install virtual microphone/camera software (e.g. ManyCam) and use it to stream your mp3 file(s) through a virtual microphone. With that done, you just have to bind this microphone to your AS3 app (see the snippet below). Sadly, it doesn't work for your project, since you can't reasonably ask the publishing peer to install a virtual microphone.
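Binding the virtual microphone itself is straightforward: Microphone.names lists every capture device Flash Player can see, so you can pick the virtual one by name. A small sketch (the "ManyCam Virtual Microphone" device name is only a guess at what it would be called):

    import flash.media.Microphone;

    // Find the virtual device among the available microphones.
    var index:int = Microphone.names.indexOf("ManyCam Virtual Microphone"); // device name is a guess
    var mic:Microphone = Microphone.getMicrophone(index); // -1 falls back to the default device
    mic.setSilenceLevel(0);      // don't gate out quiet passages in the music
    netstream.attachAudio(mic);  // then publish as usual
    netstream.publish("audio");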
Streaming using Sound.extract(), NetStream.send() and SampleDataEvent.SAMPLE_DATA. As you might know, NetStream.send() can be used to send messages to peers. The thing is, those messages are serialized and can be ByteArrays, so you can send audio data samples with NetStream.send(). The publishing peer's app can obtain the data samples with Sound.extract(), and the receiving apps can play them thanks to the SAMPLE_DATA event. One of the problems will be knowing when to send new samples. To manage that, I would recommend also using the SAMPLE_DATA event in the publishing app and sending new data each time the event fires. The main issue with this method is that since you don't use the standard way of streaming audio over RTMFP, the receiver needs a custom app to play it. Given what you said about your project, that shouldn't be a problem.
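A rough, untested ActionScript sketch of this second option, assuming the RTMFP NetConnection and the publishing/playing NetStreams already exist; the "audioChunk" message name and the 4096-sample block size are arbitrary choices:

    import flash.events.SampleDataEvent;
    import flash.media.Sound;
    import flash.net.NetStream;
    import flash.utils.ByteArray;

    // ---- publishing peer ----
    var mp3:Sound;                  // assumed already loaded
    var publishStream:NetStream;    // assumed already publish()'ed over RTMFP
    var readPosition:Number = 0;
    var clock:Sound = new Sound();  // silent dynamic sound used only as a timing source

    clock.addEventListener(SampleDataEvent.SAMPLE_DATA, function(e:SampleDataEvent):void {
        var chunk:ByteArray = new ByteArray();
        readPosition += mp3.extract(chunk, 4096, readPosition); // pull 4096 stereo samples
        publishStream.send("audioChunk", chunk);                // ship them to the peers
        for (var i:int = 0; i < 4096; i++) {                    // keep the clock alive with silence
            e.data.writeFloat(0);
            e.data.writeFloat(0);
        }
    });
    clock.play();

    // ---- receiving peer ----
    var playStream:NetStream;       // assumed already play()'ing the published stream
    var buffer:ByteArray = new ByteArray();
    playStream.client = {
        audioChunk: function(chunk:ByteArray):void {
            buffer.writeBytes(chunk);                           // queue incoming samples
        }
    };

    var output:Sound = new Sound();
    output.addEventListener(SampleDataEvent.SAMPLE_DATA, function(e:SampleDataEvent):void {
        buffer.position = 0;
        var floats:int = Math.min(buffer.length / 4, 8192);     // up to 4096 stereo samples
        for (var i:int = 0; i < floats; i++) e.data.writeFloat(buffer.readFloat());
        for (var j:int = floats; j < 8192; j++) e.data.writeFloat(0); // pad with silence if starved
        var rest:ByteArray = new ByteArray();
        if (buffer.bytesAvailable) rest.writeBytes(buffer, buffer.position);
        buffer = rest;                                          // drop what was just played
    });
    output.play();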
Reproducing the RTMFP protocol using Socket. This would be long, very complex, and error-prone. I would never recommend doing it, except perhaps as a learning experience. You would need to read, understand and implement most of the RTMFP specification.
I successfully created an application where I can record microphone sound using Flash and then save that stream to a Red5 server.
But lately I came across a strange requirement: capturing the machine's output volume and saving that stream to the Red5 server. For example, if I listen to a song, a Skype call, or any other sound, I want to capture those sounds.
I searched for this sort of situation just to get a head start, but I haven't found any solution to proceed with.
Can anyone here provide a starting point for this?
Can this be done through Flash, or any other way?
Any help will be appreciated. Please provide suggestions.
Thanks
There are certainly ways to get this done; however, using Flash as-is I don't see it being possible (most likely a security concern: Adobe doesn't want ads arbitrarily recording people's conversations).
Programs like Audacity are capable of recording from the Stereo Mix output of a computer, which is essentially what you're asking about. You could potentially build an AIR app that includes an ANE that packages the functionality from Audacity, but it would require quite a bit of porting and, well, time.
I'm looking for a way to securely deliver video to mobile devices. There are two options:
HLS in a <video> tag. This works very nicely on iOS and supports adaptive bitrate, which is perfect for mobile. However, it seems to only work well on iOS; support for it on Android is fragmented. I've read that Android has officially supported HLS since 3.0, but on all the Android devices I've tested (>3.0), HLS hasn't played back in the browser.
Progressive download in a <video> tag. This will work fine on iOS and Android devices, but the concern is that since it's just a progressive download of the video, the user could find a way to grab the file once the browser has finished downloading it. This may be more difficult on iOS, but I'm sure it's not that hard to figure out where the browser stored the downloaded video in a tmp folder somewhere.
Either method, I'd say, can be protected from deep linking by using an expiring-token approach, where the token is generated server-side with a secret key that only the content server knows about. The video request would only be valid for 5 or 10 minutes, which would kill off deep linking.
Is anyone aware of any way around these issues? Even if I were able to prevent deep linking, the user could still get the video itself and redistribute it. Perhaps it's just not possible?
Thanks
Rule #1 of the internet:
If you don't want someone stealing it, don't put it online.
Welcome to the circumvention arms race. Brought to you by DownloadHelper.
There's nothing you can do to stop someone who really wants to pirate your video. There are various measures, like those you mention, that make it more difficult, but someone who really wants to copy it could find a way to capture it from memory, or even just point a camera at the screen and record the playback of the video.
It's the same way you protect your car: you install a steering lock, an alarm and an engine immobiliser, and then someone comes along, pulls the car onto a flat-bed truck and drives away with it.
Bottom line - you can't stop a determined thief, but you can make theft more difficult so that you're not the most attractive target.
As I was reading the above, I realized I could get past all of these techniques pretty quickly.
For a project I can't describe in much detail because of an NDA, we created our own protocol based on a well-known, military-grade encryption method (which I can't mention either), encoded packets to that protocol on the server, and decoded them on the device.
Unfortunately this isn't perfect either, because a lot of mobile apps can be reverse-engineered, and once someone gets the key it's game over (very easy on Android). Of course you could periodically recycle the key, in which case even if they decompiled the Android app and got the key, it wouldn't work for very long.
This is a lot of work and can't be implemented with HTML5, HLS or even RTSP.
It also requires a custom server application that takes the video stream and re-transmits it with the custom protocol.
On the other hand, the protocol was transport-agnostic, which meant we could use a variety of transports: TCP, IAP and Bluetooth. It would also work on all mobile and desktop platforms.
The other little requirement is that it couldn't use a browser; it has to be a custom app.
My requirements are similar to this old question from 2009. I am re-posting it since the original post is about two years old and the question is closed now.
How can I transmit a live video stream over a socket using Flex / ActionScript 3.0?
I am developing an application that works on a P2P architecture, so I can't use FMS for live media streaming. I have read about the NetConnection and NetStream classes but can't start using them, since all the examples use FMS. How do I do this?
Secondly, I also need a suitable library / tool / technique to encode (and then decode) video frames before displaying and transmitting them. For this I have looked at the x264 codec, but using it with Flex seems too complicated. Any other alternatives?
Any tutorial / blog will be of great help...
You can send data directly to a remote machine; however, that machine would need to be listening, and unless you are using AIR, that machine would need a socket policy file (see the example below). Obviously, not being able to connect multiple machines directly to each other without a policy file forces you to have a central server, and prevents straightforward implementations of in-browser P2P chat/video/(whatever) applications.
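For reference, the socket policy file is just a small XML document that Flash Player fetches (by default from port 843 on the target host, or from wherever Security.loadPolicyFile() points) before it will open the socket. A wide-open example looks like this:

    <?xml version="1.0"?>
    <cross-domain-policy>
        <!-- allow SWFs from any domain to connect to any port on this host -->
        <allow-access-from domain="*" to-ports="*" />
    </cross-domain-policy>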
So you have to have a central server; however, you don't have to pay for one.
I knew I had read about this somewhere, so I searched google and came up with the links below.
http://haxe.org/doc/flash/peer2peer
https://github.com/OpenRTMFP/Cumulus
All you need is a developer key, that Adobe gives away for free at:
https://www.adobe.com/cfusion/entitlement/index.cfm?e=stratus
With that said, you will need to know some C++ in order to take proper advantage of this.
If you want to learn how to do something basic to get started, and you are really just interested in developing something for your local network, then these articles tell you how to do RTMFP as a multicast group (a minimal sketch follows the links):
http://www.flashrealtime.com/videotutorial-remote-device-controller/
http://www.flashrealtime.com/local-flash-peer-to-peer-communication-over-lan-without-cirrus/
[EDIT: since the site removed those pages, the content of the last two links can currently be found via the Wayback Machine, using the snapshots from early 2011.]
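In case those pages disappear again, the serverless LAN-only variant boils down to something like the sketch below (the group name and multicast address are arbitrary; a receiving peer would build a NetStream with the same groupspec and call play("video") instead of publish()):

    import flash.events.NetStatusEvent;
    import flash.media.Camera;
    import flash.net.GroupSpecifier;
    import flash.net.NetConnection;
    import flash.net.NetStream;

    var nc:NetConnection = new NetConnection();
    nc.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
    nc.connect("rtmfp:");   // serverless RTMFP, LAN only -- no Cirrus/FMS needed

    function onStatus(e:NetStatusEvent):void {
        if (e.info.code == "NetConnection.Connect.Success") {
            var spec:GroupSpecifier = new GroupSpecifier("example/videoGroup"); // arbitrary name
            spec.multicastEnabled = true;
            spec.ipMulticastMemberUpdatesEnabled = true;
            spec.addIPMulticastAddress("225.225.0.1:30000");    // arbitrary LAN multicast address

            var ns:NetStream = new NetStream(nc, spec.groupspecWithAuthorizations());
            ns.addEventListener(NetStatusEvent.NET_STATUS, function(ev:NetStatusEvent):void {
                if (ev.info.code == "NetStream.Connect.Success") {
                    ns.attachCamera(Camera.getCamera());        // publish the local webcam
                    ns.publish("video");
                }
            });
        }
    }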
There is a ton of stuff that you can do with the information above; yet, I would start small.
You will need a media server anyway, either FMS or SmartFoxServer or something else, and have both parties connect to that server. It will also deal with the encoding. I don't think you can do this without a media server in between...
I have an interesting project wherein I need to allow users to capture video of themselves with a webcam at a kiosk, after which I email them a link to their video. The trick is the resulting video needs to be a 'slow motion' version of the captured video. So for example, if someone creates a 2 minute movie, the resulting movie will be 4 minutes.
I'd like to build this in Flex / AS3 if possible. I don't have issues capturing the video and storing it / generating and emailing a link, but slowing down the video is the real mind bender. I'm unsure how to approach 'batch post-processing' a set of videos using Adobe tools.
Has anyone had a project similar to this or have suggestions on routes to take in order to do this?
Thanks!
-Josh
This is absolutely feasible from the client side, contrary to what some may believe. :)
http://code.google.com/p/flvrecorder/
Just adjust the capture rate, which shouldn't be too difficult; all the source is there.
Alternatively, you could write an AIR app that launches Adobe Media Encoder after writing a file, using a preset that has the FTP info, etc. Or you can just use the Socket class to connect and upload over FTP.
http://code.google.com/p/fl-ftp/
It is not feasible to do this client-side.
Capture the video and send it to the server.
Use a library like FFmpeg to do your conversions.
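For the slow-motion part specifically, a single FFmpeg invocation along these lines (file names are placeholders; verify the filters against your FFmpeg build) stretches a 2-minute capture into 4 minutes by doubling the video presentation timestamps and halving the audio tempo:

    ffmpeg -i capture.flv -filter:v "setpts=2.0*PTS" -filter:a "atempo=0.5" slowmo.mp4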