FFmpeg slow VP8 encoding

I am trying to encode video from my webcam into a VP8 stream. Sending a WebRTC stream from my webcam using Chrome looks pretty good and doesn't use much CPU, but when I try to transcode my webcam stream to VP8 (WebM) using FFmpeg it is very, very slow.
On OS X I use the following FFmpeg options to generate a VP8 WebM file. The source is a 720p FaceTime webcam. It maxes out my CPU (late 2011 Core i7 MBP) and the quality isn't very good:
ffmpeg -f avfoundation -i 'default' -y -qmin 11 -qmax 45 -b:v 500k -cpu-used 0 -deadline realtime test.webm
Which protocol does WebRTC use, and how can Chrome be so fast? I was under the impression that VP8 cannot be encoded in hardware. On modern Intel CPUs you could use Quick Sync, but I guess that is H.264-only and not supported by FFmpeg.

This is actually normal. Right now the WebM Project is still relatively small, with the only major adopter being Google's YouTube streaming service.
WebM encoding using the VP8 codec is extremely slow, and the newer VP9 codec is even harder on consumer machines. That isn't much of a problem for Google's massive server farms, but the major benefit of WebM video, its highly effective compression, is also its downfall for average users.
From the WebM Project site:
Encoding WebM videos seems really slow. What are you doing about that?
Today, encoding VP8 in "best quality" mode is the slowest configuration. Using "good quality" mode with the speed parameter set between 0 and 5 will provide a range of speeds. We believe that we can make substantial VP8 speed improvements, especially with your help. We increased overall VP8 decoder performance by ~28% in our October 2010 "Aylesbury" release and are focusing on encoder speed improvements for our next named release.
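In practical terms, the biggest lever in your command is -cpu-used: 0 is the slowest setting, and raising it trades quality for large speed gains under the realtime deadline. A minimal sketch, with illustrative (untuned) values:
ffmpeg -f avfoundation -i 'default' -y -c:v libvpx -deadline realtime -cpu-used 8 -threads 4 -b:v 1M test.webm
Chrome's WebRTC stack runs libvpx in a similar realtime configuration (and drops resolution when CPU is scarce), which is largely why it appears so much faster.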
Hope this helps!

Related

How do I live transcode a wmv file for playback in html5 player?

I've been working at this for quite awhile and still haven't found a solution that works. I need a way to live convert (transcode) a .wmv file for playback in a html5 web player.
I have a Linux server (Apache) set up to stream video files through an HTML5 web player (Video.js) aimed at Chrome and Firefox browsers. The file types I am dealing with are .mp4 (H.264), .mkv, and .wmv. The good news for me is that I can handle mp4 and mkv natively; however, I can't play wmv. Also, I have to deal with a lot of files, which change periodically and can be quite large.
After doing a lot of research and reading many times how you can't stream wmv directly, I came to the realization that I had two options. Either convert the file to a supported format or live transcode a file for use in the web player. Due to the amount of files and their size (and periodically changing) converting the file is simply not feasible. So I am stuck with live streaming/transcoding. I figured ffmpeg would be the way to go, but I have yet to figure out how to get ffmpeg to live stream into the html5 player.
So how do I take an existing .wmv file and live stream it in an html5 player?
The things I've tried so far:
Tried creating an m3u8 playlist and hoping it would magically work:
ffmpeg -i "hello.wmv" -s 640x480 -c:v libx264 -f ssegment -hls_flags delete_segments -segment_list playlist.m3u8 -segment_list_type hls -segment_list_size 10 -segment_list_flags +live -segment_time 10 out_%6d.ts
Simply copying to mp4 and streaming it while ffmpeg was still running. Obviously didn't work:
ffmpeg -i "hello.wmv" -vcodec copy video.mp4
Converting to WebM format and streaming the webm while ffmpeg was still running. This actually did show the video for a few seconds in the HTML player:
ffmpeg -i "hello.wmv" -codec:a libvorbis -codec:v libvpx -b:a 128k -b:v 1200k video.webm
Using ffmpeg is not a requirement (I was thinking of VLC as well), but the HTML5 player is. Completely converting and then streaming isn't a viable option because the files can be too large and change periodically. What command/program can I use to stream the file for playback in the HTML player?
After trying a LOT of different ways, I finally came up with a workable solution. Posting it here for anyone who may come across it in the future. The solution I ended up with uses HLS (HTTP Live Streaming), which segments the file; I then pointed the video tag of my HTML5 player at the output .m3u8 file.
The following is what I used in ffmpeg. Note that I set the preset to ultrafast (because libx264 was very slow from what I've seen). I'm sure there are more efficient parameters to use with ffmpeg and I will definitely continue testing, but this is confirmed as working:
ffmpeg -i "hello.wmv" -preset ultrafast -c:v libx264 -f ssegment -hls_flags delete_segments -segment_list play_file.m3u8 -segment_list_type hls -segment_list_size 0 out_%6d.ts
In the html video tag, simply use:
<source src="play_file.m3u8" type="application/x-mpegURL">
Note for anyone who may come across this in the future: if you run into a "file not supported" error when using the type x-mpegURL, the problem is with your player JavaScript, not the browser. Make sure you include the HLS source handler (in my case videojs-contrib-hls.js) or the player will throw that error. It took me a long time to figure out that the issue was the JS on the page rather than the browser.
I think this solution should work for nearly any video type that ffmpeg supports. Simply change around the input file and maybe mess around with the codecs if necessary.
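One caveat (an assumption about typical WMV sources, since many carry WMA audio): WMA generally can't be played from the MPEG-TS segments HLS uses, so if the stream comes through silent, a variant of the same command that explicitly transcodes the audio to AAC should help:
ffmpeg -i "hello.wmv" -preset ultrafast -c:v libx264 -c:a aac -b:a 128k -f ssegment -hls_flags delete_segments -segment_list play_file.m3u8 -segment_list_type hls -segment_list_size 0 out_%6d.ts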

Current best practice to stream live video in web browser?

We develop an IP camera product which streams H.264/MPEG4/MJPEG video via RTSP/UDP. It has a web interface; currently we use the VLC Firefox plugin to allow viewing of the live RTSP stream in the browser, but Firefox is dropping support for NPAPI plugins, so that's currently a dead end.
The camera itself is a relatively low-powered ARM SoC (think Raspberry Pi level) so we don't have vast spare resource to do things like transcode streams on-the-fly on the board.
The main purpose is to check from the web interface that the video stream is working correctly, so serving a new stream (or transcoding it) via some other format/transport/streaming engine is less desirable than being able to play the original RTSP stream directly somehow. In regular use the video is streamed via RTSP into a VMS server, so that's not up for alteration.
In an ideal world the solution would be open-source, cross-browser, and happen inside an HTML5 tag, but if it works in one or more of the most popular browsers we'll take it.
I've been reading all sorts of material here and around the web about the brave new world of the HTML5 video tag, WebRTC, HLS, etc., and have yet to see anything that looks like a sensible and complete solution. Everything seems to involve some extra conversion/transcoding/re-streaming step, often via a half-supported framework or an extra server in the middle, which is not a viable solution.
I haven't yet found a proper description of what may or may not be required to "convert" our stream to whatever HTML5 video likes: whether it's just a slightly different wrapper around the same basic video stream, or whether there's a lot of overhead and everything is different. Likewise, it's not clear whether the conversion could be done on-board, or perhaps even in-browser using JS.
The reason for the title is that if we've got to change the way it all works we may as well aim to do whatever is considered "best practice" and reasonably future-proof as far as possible rather than some expedient fudge that might not work beyond the next round of browser updates / the next W3C press release...
I find it slightly disappointing (but perhaps not surprising) that in 2017 there seems to be no sensible way of achieving this.
Perhaps "least worst practice" would be more suitable terminology...
There are many methods you can use that don't require transcoding.
WebRTC
If you're using RTSP, you're already most of the way to sending your streams via WebRTC.
WebRTC uses SDP for declaring streams, and RTP for the transport of these streams. There are some other layers you need for setting up the WebRTC call, but none of these require particularly expensive computation. Most (all?) WebRTC clients will support H.264 decoding, many with hardware acceleration in-browser.
The easiest way to get started with WebRTC is to implement a browser-to-browser client first. Then, you can go a layer deeper with your own implementation.
WebRTC is the route I recommend to you. NAT traversal (in most cases) and P2P connectivity are built-in, so your customers won't have to remember IP addresses. Simply provide signalling services and your customers can connect directly to their cameras at home from wherever. Provide TURN servers, and they'll be able to connect even if both ends are firewalled. If you don't wish to provide such services, they're lightweight and can run directly on the camera in a mode like you have today.
Fragmented MP4 over HTTP Progressive with <video> tag
This method is much simpler than WebRTC, but totally different from what you're doing now. You can take your H.264 stream and wrap it directly in an MP4 without transcoding; then it can be played in a <video> tag on a page. You'll have to implement the appropriate libs in your code, but here's an FFmpeg example that outputs to STDOUT, which you'd pipe to clients:
ffmpeg \
-i YOUR_CAMERA_HERE \
-vcodec copy \
-acodec copy \
-f mp4 \
-movflags frag_keyframe+empty_moov \
-
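If you'd rather not write the HTTP plumbing around that pipe yourself, recent FFmpeg builds can serve the fragmented MP4 directly to a single client using the HTTP protocol's listen mode. A sketch, with the camera URL and port as placeholders:
ffmpeg \
-rtsp_transport tcp \
-i rtsp://YOUR_CAMERA_HERE \
-vcodec copy \
-acodec copy \
-f mp4 \
-movflags frag_keyframe+empty_moov \
-listen 1 \
http://0.0.0.0:8080/live.mp4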
Others...
In your case, there's no added benefit to DASH. DASH is intended for utilizing file-based CDNs for streaming. You control the server, so there's no point in writing out files or handling HTTP requests in a file-like manner. While you can certainly use DASH with H.264 streams without transcoding, I think it's a waste of your time.
HLS is much the same. Your stream is compatible with HLS, but HLS is rapidly falling out of favor due to its lack of codec flexibility. DASH and HLS are essentially the same mechanism: write a bunch of media segments to a CDN and create a playlist or manifest indicating where they are.
Well, I had to do the same thing a while back on a Raspberry Pi 3. We transcoded on the fly using ffmpeg on the Pi and used https://github.com/phoboslab/jsmpeg to stream MPEG-1 video over WebSockets, then played it in the browser / Ionic app.
var canvas = document.getElementById('video-canvas'); // canvas element the player renders into
this.player = new JSMpeg.Player(this.button.url, { canvas: canvas }); // this.button.url holds the WebSocket stream URL
We managed up to 4 concurrent streams on our Pis with roughly 2-5 seconds of delay.
But once we moved to React Native, we used the RN VLC wrapper on the phones.
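For anyone wanting to reproduce that setup: per the jsmpeg README, ffmpeg pushes an MPEG-TS stream with MPEG-1 video into the project's websocket-relay, and the browser player connects to the relay. A sketch, with the secret, ports, and camera URL as placeholders:
node websocket-relay.js supersecret 8081 8082
ffmpeg -i rtsp://YOUR_CAMERA -f mpegts -codec:v mpeg1video -s 640x360 -b:v 800k -r 24 -bf 0 http://localhost:8081/supersecret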

HTML5 Audio Streaming from Live Source

I am looking into implementing live audio streaming from a vxWorks system into an HTML5 audio player. I have been researching for a while but am stuck on one key step.
Work Flow:
A host has a mic and speaks into the mic
Audio is received by the vxWorks OS. Here it can be encoded and packaged; anything is possible at this step
????
User opens web page to listen to the live audio in an HTML5 player.
I don't know what goes in step 3. Suppose I have encoded the audio into MP3. What technologies do I need to send this to the browser? I believe I can send it through the HTTP protocol, but I don't understand how it is packaged, i.e., how the audio is packaged into HTTP. And what does the HTML5 player want as a source for this data? A URL? Or WebSocket data?
Thanks.
The solution for #3 depends on whether you need a low-latency live audio stream at the HTML5 player, or whether a latency of 10-30 seconds is acceptable.
1. Latency can be high
You need a streaming server / HLS packager that will stream via HLS, and a webpage hosting Flowplayer or JWPlayer, which will play that HLS stream using the HTML5 video tag. An AAC-encoded (not MP3!) audio stream needs to be pushed to that streaming server / HLS packager (RTMP publishing is the most popular method).
You could go with free nginx (with its RTMP module); it is portable to vxWorks and can stream out via HLS. You could also try free VLC or ffmpeg for the same.
If you make it work, the stream will be playable on any browser on any device - iOS, Android, Windows OS etc...
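For concreteness, publishing an AAC audio stream to such a server over RTMP can be done with ffmpeg; a sketch, with the source file and ingest URL as placeholders:
ffmpeg -re -i source.wav -vn -c:a aac -b:a 128k -f flv rtmp://your-server/live/streamkey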
2. You need low latency
This is much harder. You would need another machine running Linux or Windows OS.
On that machine install Unreal Media Server or Evostream server - these servers stream to HTML5 players via Websocket protocol using ISO BMFF packaging.
As before, an AAC-encoded (not MP3!) audio stream needs to be pushed to that streaming server (RTMP publishing is the most popular method).
These streams will play on any browser / any OS EXCEPT iOS!

How does Screencastify Chrome App work?

I need to develop something similar, but all I've got so far is a Chrome app which uses Whammy.js to take WebM frames from a desktopCapture stream and encode them into a .webm video. It is extremely slow (almost 5 minutes for 30 seconds of video) and I can't record the system sound.
I tested this Screencastify app and I think it does a pretty decent job: it records quickly even in full HD and can also record the system sound. But how does this work? AFAIK Chrome doesn't have an API for recording the system sound or for encoding video that quickly.
Screencastify uses Native Client.

RTP/RTSP Live Streaming Display on a web page

What is the best solution to display multiple live streams from a surveillance camera (so low latency is a requirement) in a web application (VideoWall-like)?
Personally I'm thinking about two possible solutions, but I can't choose between them:
1) Develop a custom Firefox plugin that uses ffmpeg to acquire and decode the video streams
2) Rely on HTML5 inserting a layer between the cameras and the web application that transcodes/restreams the video streams (probably using http live streaming)
The requirements are compatibility with H.264 over RTSP and MPEG-4 over RTP and, of course, low latency and no loss of video quality.
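For option 2, the intermediate layer could be as simple as one ffmpeg process per camera repackaging the H.264 RTSP stream into HLS without re-encoding; a sketch, with the camera URL and segment settings as placeholders (note that HLS adds several seconds of latency, which may conflict with the low-latency requirement):
ffmpeg -rtsp_transport tcp -i rtsp://CAMERA_URL -c:v copy -an -f hls -hls_time 2 -hls_list_size 5 -hls_flags delete_segments stream.m3u8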
Thanks
Andrea