I'm currently stuck on this problem and I hope somebody can help me out. I'm trying to create some sort of decoder that converts a video stream into a video input device so I can use it in Wirecast (a video streaming program).
At this stage I use MJPEG IP cameras as video sources through a neat little program that converts a raw IP address:port into an input device. It works perfectly with unlimited cameras, but it supports neither RTSP nor H.264. I have since upgraded a few cameras so I can get access to HD video.
I have tried a number of RTSP source filters from all over the net, and some programs like xpwebcam to get access to their H.264 filter, but no luck as yet. I have tried to create my own filter using GraphStudio, but it is beyond my understanding.
The IP cameras' video feed URL looks like this:
Video Feed:
rtsp://xxx.xxx.xxx.xxx/0/video0
where X = 0, 1 or 2 selects the resolution.
rtsp://user:pass#10.0.0.10/0/video0
or rtsp://#10.0.0.10/0/video0 for non-protected cameras; it's a private network, so it doesn't matter, whatever works.
I can successfully stream the video feed live using VLC, but not much else. I'm not sure if there's a way to turn a stream into an input device.
I have been trying to do this for weeks now but have had very little luck getting it to work.
Please help me :)
As a professional photographer with many years in the field, I found this question rather interesting. The answer you are looking for can be found at this site:
http://alax.info/blog/1416
The site lists the update you need for your equipment.
If you have no source filter, can't you simply read from the source and write to a file, and have your other program read from that file simultaneously? I have used such a trick many times on unix. I can't see why it wouldn't work here.
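A rough sketch of that trick on a unix box, assuming ffmpeg is available and using the camera URL from the question (the pipe path is hypothetical, credentials are separated from the host with @, and it assumes the consuming program is happy reading a named pipe as if it were a file):
mkfifo /tmp/camera0.ts
ffmpeg -rtsp_transport tcp -i "rtsp://user:pass@10.0.0.10/0/video0" -c copy -f mpegts /tmp/camera0.ts
The other program is then pointed at /tmp/camera0.ts while ffmpeg keeps writing to it.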
I've got a website and I've been looking for ways to embed a 24/7 webcast. I've looked at options such as Ustream and Justin.TV; however, these do not work on mobile devices, which is what I really need.
I don't have much knowledge of how streaming works, but I've read that the Wowza streaming engine is another option. I also found that an HTML5 player works cross-platform and on any mobile device as well.
If I were to use Wowza, would it work with an HTML5 player? And am I even on the right path with how I can do this? I also have a dedicated home server for streaming, so a cloud wouldn't be required.
I'm very much an amateur, just trying to broadcast my television program on my website for viewing. Any advice would help here. Thanks.
Wowza can packetize video as HTTP Live Streaming (HLS) which, although an Apple invention, works on most HTML5-capable browsers except IE11: http://www.jwplayer.com/html5/hls/ . Many players will fall back to Flash for browsers which don't support native HLS or H.264 decoding. Flash uses HTTP Dynamic Streaming (HDS) rather than HLS, so you would add that as another packetizer in Wowza. (Wowza calls these packetizers "cupertinostreamingpacketizer" and "sanjosestreamingpacketizer" respectively.)
You would then point your preferred HTML5 video player (jwplayer, flowplayer, etc.) at the URL http://your-wowza-server.com:1935/live/yourstreamname/playlist.m3u8 [1]. For Flash fallback in flowplayer you can use the f4m resolver and the http-streaming plugin, as in the first example here, to access the subtly different URL http://your-wowza-server.com:1935/live/yourstreamname/manifest.f4m. I'm sure something similar applies in players like jwplayer and others.
The main problem with Wowza is how much it costs: for your own server you're looking at around $55 per month per channel [2]. At least during testing, you may find it cheaper to get Wowza on Amazon EC2 devpay: $5/month rental plus an extra couple of cents per hour on your normal EC2 instance costs.
[1] Assuming you're using Wowza's default /live/ application on port 1935
[2] A channel is roughly the number of streams you're sending to the server to be re-broadcast
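For reference, a sketch of where those packetizers get enabled; the file path and surrounding elements are from memory, so treat this as an assumption to verify against the Wowza documentation:
<!-- fragment of conf/live/Application.xml (hypothetical location) -->
<Streams>
    <StreamType>live</StreamType>
    <LiveStreamPacketizers>cupertinostreamingpacketizer, sanjosestreamingpacketizer</LiveStreamPacketizers>
</Streams>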
We developed a custom HTML5 player which we wanted to make compatible with HLS and fragmented MP4 for live events. We started on Zencoder but realized they were not able to generate fragmented MP4.
I would like to explore the Flash fallback solution and Wowza (probably on AWS) for the packaging.
Would you be available to consult on this project?
We use www.bitcodin.com for event-based or 24/7 live transcoding and streaming. It generates DASH, which can be played back natively in HTML5 using the bitdash MPEG-DASH players, as well as HLS for iOS devices. You can find an example here: http://www.dash-player.com/demo/live-streaming/
I've spent plenty of time on this problem, but it looks like I need some help. I have a web conference application which provides the ability to stream live video, chat, share documents, draw on a whiteboard, share the desktop, etc. Now I want to record everything that happens in each individual webinar, including video and sound. So I'm looking for tools that can help with this goal.
Here's the input data:
This is an Adobe Flash based application
It uses a Wowza server
Everything should be recorded on the server
Many webinars can be in recording mode at the same time
The recording should be a video file (FLV, MP4 or whatever)
What I've done so far and what problems I have:
I have implemented recording on the server side. But this is not a video; it is just a list of commands to recreate the webinar after the fact. It works, but has lots of limitations and problems with rewinding.
Now I'm testing this FLV Encoding library. I created an AIR application that starts on the server when a recording is needed, connects to the webinar in question and takes screenshots of itself with the BitmapData.draw() method. It works pretty neatly, but has some limitations that I'm looking for help with:
First of all, there is the sound problem. I have no idea how to capture all sounds from all sources in Flash. So far, from my tests and googling, I conclude that SoundMixer.computeSpectrum() won't help me do this. Maybe this could be done on the server side by mixing all streams at the right time, but I think that can lead to synchronization problems, and I'd prefer to capture sound on the client. Maybe there is a way to capture the audio byte array from the RTMP stream somehow?
Security problems. We have two kinds of them. The first is with streamed videos: the BitmapData.draw() method throws exceptions even after adding <StreamAudioSampleAccess>true</StreamAudioSampleAccess> and <StreamVideoSampleAccess>true</StreamVideoSampleAccess> on the server. There are lots of posts about this problem and no good solution.
But a more complex problem is that YouTube videos can be opened in the webinar using the API player, and in that situation I have no idea how to resolve the security problem. Maybe someone knows a way or a workaround to use BitmapData.draw() on the YouTube AS3 player?
Or maybe there is another good way to solve my recording issue?
The free Apache OpenMeetings conferencing software [1] has a Java recording application inside which should work in the 3.0 release. Just use it.
[1] http://openmeetings.apache.org/
I am currently looking into how to accomplish what I have been told is possible.
I was told that we would be able to use VLC to stream a webcam on Linux, which would allow for the following:
Recording the stream to the local machine for a later upload.
Play the stream as it's recording using Chrome's HTML5 video capabilities.
Send a start and stop command from the web for the vlc recording.
I have been researching this for quite some time and haven't been able to find a viable solution.
I am already able to record video using VLC with the following:
vlc v4l2:// :v4l2-dev=/dev/video0 :v4l2-width=640 :v4l2-height=480 --sout "#transcode{vcodec=mpeg4,acodec=mpga,vb=800,ab=128}:standard{access=file,dst=capture_4.avi}"
Is this really even possible?
To answer your question whether this is possible: YES, it is, BUT it's tricky. I can't answer all your points, only the part about streaming with VLC and displaying it in HTML5.
You'll need a certain environment set up for this to work (a segmenter and the correct MIME type on the server). I assume you are on Linux, which I am not (Mac OS / unix), but the principles behind the workflow stay the same. I'll try to explain; hope this helps in any way.
The setup I've had success with works the following way:
(1) STREAMING & RECORDING
A local VLC streaming instance streams audio and video, producing an MPEG-TS stream. Try changing your command to something like:
vlc v4l2:// :v4l2-dev=/dev/video0 :v4l2-width=640 :v4l2-height=480 --sout "#transcode{vcodec=mpeg4,acodec=mpga,vb=800,ab=128}:standard{access=udp, mux=ts, sap, name=live-video, dst=224.0.0.1, port=1234}"
or
vlc v4l2:// :v4l2-dev=/dev/video0 :v4l2-width=640 :v4l2-height=480 --sout "#transcode{vcodec=mpeg4,acodec=mpga,vb=800,ab=128}:udp{dst=224.0.0.1,port=1234,mux=ts}"
I'm just giving you ported commands here which work on Mac; I don't know if they work on Linux. Now you should be able to play the live stream in VLC by accessing the SAP announcement or directly with:
vlc -vvv udp://#224.0.0.1:1234
You could then use another VLC instance to record the stream:
vlc udp://#224.0.0.1:1234 --sout "#transcode{vcodec=mpeg4,acodec=mpga,vb=800,ab=128}:standard{access=file,dst=capture_4.avi}"
There is a duplicate module in VLC's stream output which I have been playing around with, but without success. That way you could stream and record with one instance. Maybe it works on Linux.
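For reference, an untested sketch of what that duplicate chain might look like, simply mirroring the parameters from the commands above (whether this exact form works is an assumption):
vlc v4l2:// :v4l2-dev=/dev/video0 :v4l2-width=640 :v4l2-height=480 --sout "#transcode{vcodec=mpeg4,acodec=mpga,vb=800,ab=128}:duplicate{dst=standard{access=udp,mux=ts,dst=224.0.0.1,port=1234},dst=standard{access=file,dst=capture_4.avi}}"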
(2) SEGMENTING
Use mediastreamsegmenter to segment your MPEG-TS stream into deliverable segments. I'm using Apple server software: Apple provides a mediastreamsegmenter which can take a live MPEG-TS stream and convert it into segments which are added to a playlist. I don't know of a live segmenter on Linux; maybe someone else does.
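From memory, the invocation looks roughly like the following; the flags and output directory are assumptions, so check man mediastreamsegmenter:
mediastreamsegmenter -b http://your-server/stream/ -f /Library/WebServer/Documents/stream -t 10 -s 5 224.0.0.1:1234
This listens on the multicast address used above, writes segments plus a playlist.m3u8 into the target directory, and prefixes the segment URLs in the playlist with the given base URL.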
(3) DELIVERY
An HTML5 page links to the video playlist containing the segments. The mediastreamsegmenter will produce a playlist, playlist.m3u8, which can then be accessed with HTML5:
<video width="640" height="480">
<source src="YOUR_PATH/playlist.m3u8" />
</video>
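Regarding the correct MIME type mentioned at the start: on Apache, for example, the relevant directives would be roughly these (a sketch to adapt to your own server):
AddType application/x-mpegURL .m3u8
AddType video/MP2T .ts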
Some helpful tutorials concerning this topic are:
Info on the setup and basic commands
VLC examples in order to stream
I know this is not a complete solution to your problem, but maybe it will give you some nice starting points to look into.
I have an interesting project wherein I need to allow users to capture video of themselves with a webcam at a kiosk, after which I email them a link to their video. The trick is that the resulting video needs to be a 'slow motion' version of the captured video. So, for example, if someone creates a 2-minute movie, the resulting movie will be 4 minutes.
I'd like to build this in Flex / AS3 if possible. I don't have issues capturing the video and storing it / generating and emailing a link, but slowing down the video is the real mind bender. I'm unsure how to approach 'batch post-processing' a set of videos using Adobe tools.
Has anyone had a project similar to this or have suggestions on routes to take in order to do this?
Thanks!
-Josh
This is absolutely feasible from the client side, contrary to what some may believe. :)
http://code.google.com/p/flvrecorder/
Just adjust the capture rate, which shouldn't be too difficult; all the source is there.
Alternatively, you could write an AIR app that launches Adobe Media Encoder after writing a file, launching it with a preset that has FTP info, etc. Or you could just use the Socket class to connect and upload over FTP.
http://code.google.com/p/fl-ftp/
It is not feasible to do this client-side.
Capture the video and send it to the server.
Use a library like FFmpeg to do your conversions.
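As a hedged sketch of that conversion step (filenames are hypothetical, and it assumes an FFmpeg build with H.264/AAC support): doubling every video presentation timestamp plays the clip at half speed, so a 2-minute capture becomes 4 minutes, and atempo=0.5 slows the audio to match:
ffmpeg -i capture.flv -filter:v "setpts=2.0*PTS" -filter:a "atempo=0.5" slowmo.mp4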
Is there any media player solution that will play audio and video files in Firefox, Chrome, Safari and IE?
I've tried MediaElement.js but it fails on .mov. This project has clients uploading a movie file, and there can only be one version of each file. I can programmatically change the code for each type of file and the user's OS/browser, but I still couldn't get .mov files to download progressively.
What am I missing here? I'm not very familiar with media file types. Just wondering if anyone had any suggestions.
Take a look at JW Player; it's highly configurable. The best combination is with a real streaming server or provider. If you want to let your clients jump to different positions in your media files, it works best if you "normalize" all your different media types to one format (converting them after upload), be it .flv/Flash, and focus on one player like the above. The files could be streamed with modules from web servers like nginx or lighttpd, but a real provider like Bits on the Run will convert most of the files for you very easily and handle the streaming more reliably.
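As a hedged example of that normalization step after upload (filenames hypothetical; assumes an FFmpeg build with libx264 and AAC), converting everything to a single MP4 that also downloads progressively, since -movflags +faststart moves the index to the front of the file:
ffmpeg -i upload.mov -c:v libx264 -c:a aac -movflags +faststart upload.mp4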