How to measure decoding performance when using MSE or WebRTC?

For the test I'm thinking of using a WebSocket to push the stream to the client, with the video encoded as fragmented MP4. The client then decodes the stream as soon as possible using MSE (MediaSource) and WebRTC (MediaStream) along with the HTML5 <video> tag. This is just a test; for the real use case I'm targeting real-time live streaming.
Is there a way for me to measure the frame-by-frame decoding time, i.e. how long it takes the decoder to decode a frame and the renderer to render a frame? Alternatively, how can I get the real-time FPS for that?

Probably the closest you can get is by watching the webkitDroppedFrameCount and webkitDecodedFrameCount properties of the HTMLVideoElement over time. (Note, this only works in Chrome.) This isn't really going to give you the time for decoded frames, but will help you measure related performance.
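For example, here is a minimal sketch (not from the original answer) that samples those counters once per second to approximate the real-time decoded FPS; the standard getVideoPlaybackQuality() method is used where available, with the webkit-prefixed properties as a Chrome fallback:

    // Approximate real-time decode FPS by sampling frame counters every second.
    const video = document.querySelector('video');
    let lastDecoded = 0;

    setInterval(() => {
      const quality = video.getVideoPlaybackQuality && video.getVideoPlaybackQuality();
      const decoded = quality ? quality.totalVideoFrames : video.webkitDecodedFrameCount;
      const dropped = quality ? quality.droppedVideoFrames : video.webkitDroppedFrameCount;

      console.log('decoded fps (approx):', decoded - lastDecoded, '| dropped total:', dropped);
      lastDecoded = decoded;
    }, 1000);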
The time to decode one frame isn't really all that useful to you. It's going to be the same, regardless of where the data came from. It's also going to be different from frame to frame. What matters is that the decoder can keep up with the playback rate.
I should also point out that there's no reason to use web sockets if you're only sending data one direction. If you're just streaming video data to a client, use regular HTTP! You can stream the response with the Fetch API and skip the overhead of web sockets entirely.
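As a rough sketch of that approach, assuming a fragmented MP4 stream served from a hypothetical /live/stream.mp4 endpoint and an illustrative avc1 codec string (use whatever your encoder actually produces), the fetched response body can be read incrementally and appended to an MSE SourceBuffer:

    const video = document.querySelector('video');
    const mediaSource = new MediaSource();
    video.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener('sourceopen', async () => {
      const sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
      const response = await fetch('/live/stream.mp4'); // hypothetical endpoint
      const reader = response.body.getReader();

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        // Wait for the previous append to finish before appending the next chunk.
        if (sb.updating) {
          await new Promise(resolve => sb.addEventListener('updateend', resolve, { once: true }));
        }
        sb.appendBuffer(value);
      }
    });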

You can check several useful metrics in a few ways while using WebRTC.
webrtc-internals (Chrome only)
If you are using WebRTC, you can inspect the WebRTC internals. After creating the peerConnection object, type the following into Chrome's address bar:
chrome://webrtc-internals
WebRTC Internals Document
WebRTC Externals browser extension for other browsers
There you can check some useful metrics.
FPS
On the stats graphs for ssrc_****_recv (ssrc) (video),
you can check the frame rate with values like googFrameRateDecoded, googFrameRateOutput, and googFrameRateReceived.
Delay
On the stats graphs for ssrc_****_recv (ssrc) (video),
you can check delay with values like googTargetDelayMs, googRenderDelayMs, and googJitterBufferMs.
For more on putting these metrics into practice, check this out.
https://flashphoner.com/oh-webcam-online-broadcasting-latency-thou-art-a-heartless-bitch/
WebRTC Standard Stats
You can also access stats the standard way, from the peerConnection object (see the sketch after the links below).
WebRTC Standard Stats
WebRTC Stats API
https://www.w3.org/TR/webrtc-stats/#dom-rtcreceivedrtpstreamstats
RTCReceivedRtpStreamStats - jitter
https://www.w3.org/TR/webrtc-stats/#dom-rtcvideoreceiverstats
RTCVideoReceiverStats - jitterBufferDelayed
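As a hedged sketch of the standard route, you can poll getStats() on an existing RTCPeerConnection (called pc here) and read the receive-side video report; exact field availability (e.g. framesPerSecond) varies by browser and version:

    // Log decode/jitter metrics for the inbound video stream once per second.
    setInterval(async () => {
      const stats = await pc.getStats();
      stats.forEach(report => {
        if (report.type === 'inbound-rtp' && report.kind === 'video') {
          console.log('framesDecoded:', report.framesDecoded,
                      'framesPerSecond:', report.framesPerSecond,
                      'jitter:', report.jitter,
                      'jitterBufferDelay:', report.jitterBufferDelay);
        }
      });
    }, 1000);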

Related

Streaming adaptive audio on the web (low latency)

I am attempting to implement a streaming audio solution for the web. My requirements are these:
Relatively low latency (no more than 2 seconds).
Streaming in a compressed format (Ogg Vorbis/MP3) to save on bandwidth.
The stream is generated on the fly and is unique for each client.
To clarify the last point, my case does not fit the usual pattern of having a stream being generated somewhere and then broadcast to the clients using something like Shoutcast. The stream is dynamic and will adapt based on client input which I handle separately using regular http requests to the same server.
Initially I looked at streaming Vorbis/MP3 as HTTP chunks for use with the HTML5 audio tag, but after some more research I found a lot of people saying that the audio tag has pretty high latency, which disqualifies it for this project.
I also looked into Emscripten which would allow me to play audio using SDL2, but the prospect of decoding Vorbis and MP3 in the browser is not too appealing.
I am looking to implement the server in C++ (probably using the asynchronous facilities of boost.asio), and to have as small a codebase as possible for playback in the browser (the more the browser does implicitly the better). Can anyone recommend a solution?
P.S. I have no problem implementing streaming protocol support from scratch in C++ if there are no ready-to-use libraries that fit the bill.
You should look into Media Source Extensions (a rough sketch follows the links below).
Introduction: http://en.wikipedia.org/wiki/Media_Source_Extensions
Specification: https://w3c.github.io/media-source/
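As a rough, non-authoritative sketch of the browser side with MSE, assuming the custom C++ server pushes MP3 (or another SourceBuffer-supported format) over a WebSocket at a placeholder URL, chunks are queued and appended to a SourceBuffer as they arrive:

    const audio = document.querySelector('audio');
    const mediaSource = new MediaSource();
    audio.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener('sourceopen', () => {
      const sb = mediaSource.addSourceBuffer('audio/mpeg'); // placeholder MIME type
      const queue = [];

      const ws = new WebSocket('wss://example.com/audio');  // placeholder per-client stream
      ws.binaryType = 'arraybuffer';
      ws.onmessage = (event) => {
        // Queue chunks while the SourceBuffer is busy, then drain them in order.
        queue.push(event.data);
        if (!sb.updating) sb.appendBuffer(queue.shift());
      };
      sb.addEventListener('updateend', () => {
        if (queue.length && !sb.updating) sb.appendBuffer(queue.shift());
      });

      audio.play();
    });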

Stream audio and video data over the network in HTML5

How do I stream audio and video data and pass it over the network? I went through a good article here, but it did not go into depth. I want to build a chat application in HTML5.
The main questions are:
How to stream the audio and video data
How to send it to a particular IP address
How to receive that data and pass it to the video and audio controls
If you want to serve a stream, you need a server doing so, either by downloading and installing one or by coding your own.
Streams only work in one direction; there is no responding or "retrieving back". Streaming is almost the same as downloading, with slight differences depending on the service and use case.
Most streams are downstreams, but there are also upstreams. Have you heard of BufferStreams in PHP, Java, or whatever? It's basically the same: data -> direction -> cursor.
Streams work over many protocols, even via different network layers, for example:
network/subnet broadcast, peer-to-peer, HTTP, DLNA, even FTP streams, ...
The basic nature of a stream is nothing more than data being sent to an audience.
You need to decide:
which protocol do you want to use for streaming
which server software
which media / source / live or with selectable start/end
which clients
The most popular HTTP streaming server is Shoutcast by Nullsoft (Winamp).
There is also DLNA, which AFAIK is not HTTP-based.
To provide more information, I need you to be more specific regarding your basic requirements and decisions.
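For the last point of the question (getting captured data into the video and audio controls), here is a minimal, hedged sketch using the standard getUserMedia API; sending the stream to a particular peer would additionally require WebRTC or a server, which is a separate decision as described above:

    // Capture the local camera/microphone and attach the MediaStream to a <video> element.
    const video = document.querySelector('video');

    navigator.mediaDevices.getUserMedia({ video: true, audio: true })
      .then((stream) => {
        video.srcObject = stream; // pass the captured data to the video control
        return video.play();
      })
      .catch((err) => console.error('getUserMedia failed:', err));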

Sending HD movie frames through sockets to Flash

I was wondering if someone has ever done something like this. I have an HD movie (or even a 720p one) and I want to send it to a Flash client. I was thinking of using OpenCV in C++ for the decoding and sending part. I had even implemented some of this, but have problems with wrong packet sizes.
But my question is different: has anyone done anything similar to this? Does this stand a chance of improving performance? I have strong doubts about it, because I think the sending and decoding will still be difficult for the Flash machine. Looking forward to hearing some opinions from more experienced people.
not a real answer, more like thoughts about your problem:
yes, you must encode the HD images; sending 25 fps x 1.5 MB over the net is a no-go.
gstreamer was built for exactly that purpose. Complicated, maybe, but look at it anyway!
why write a program when VLC can do all of this already? (even headless/scripted!)
if there's audio to stream too, forget OpenCV. It's a computer-vision lib, not built for your problem there.
There are essentially two network protocols that are commonly used to send video from a server to a flash client, HTTP, and RTMP.
HTTP is a well-known standard, easily implemented because it is a plain-text protocol, that allows Flash Player to play on-demand video files, or do what is called pseudo-streaming.
RTMP is a proprietary protocol created by Adobe, that allows real-time streaming as well as video on demand, and can also transport structured binary data (the AMF format) to act as a remote procedure call protocol.
Although now documented, it is much more complicated to implement than HTTP, but there is an open-source library that implements this protocol, librtmp, found at http://rtmpdump.mplayerhq.hu/.
Please note that I have used librtmp with success, on the client side, to have a C program act as a Flash client to publish video on a FMS server. I have no experience of using it on the server side, I don't even know if it's possible at all.
In your case I certainly recommend using HTTP.
Now there is another problem to overcome, it is the fact that for video frames to be properly recognized, they must be embedded in a container that the Flash player can read.
Flash currently supports two container formats, FLV and F4V, the latter being a subset of the MPEG-4 container format.
Also, the video stream must be readable by Flash, and so it must be properly encoded into a format supported on the client-side, for example H.264, Sorensen, or VP6.
It is possible to directly send GIF, JPEG or PNG images as frames, as seen on page 8 of the official Flash Video Specification, but you must realize that at HD resolution this will be extremely inefficient: at 25 FPS, a single 1920x1080 JPEG image is much bigger than the equivalent H.264 frame.
So, in the end, my advice is: do not decode the video on the server, make sure it is in a format compatible with Flash, and use a well-documented protocol to send it as-is.

How can you throttle bandwidth usage on the receiving side of a video stream in ActionScript 3.0?

Right now I'm on a project that's moving video streams across RTMP using mostly ActionScript 3.0 (a little bit of 2.0 is used on the server side), and we already have functionality in place to throttle bandwidth usage for these video streams on the client level. However we're only doing that by calling the setQuality() method of class Camera, which affects every receiver of that video stream. Now though we really need a way to effectively set the bandwidth usage for individual receivers, but apparently VideoDisplay, NetStream, and NetConnection are all pretty much void of this sort of functionality. Is there not some decent way to do this in AS3? If there is a way, how? Thanks!
EDIT: For clarity let's say that the sender of the video stream has their Camera object's quality set to use 1 meg of bandwidth. How could I make a receiver of that stream only use half a meg of bandwidth to stream that video in without messing with the sender's 1 meg setting?
FMS just passes data received from the publisher to the set of subscribers. It doesn't change it (at least from the data point of view). What you require, though, is transcoding of the video stream being published according to subscriber needs.
Plain RTMP doesn't do that at all. I think there is a way to publish multiple streams for the same data using the HTTP streaming feature, but in that case the publisher would really be publishing multiple streams of the media to FMS.

Is it possible to stream live video to Flash Media Server via NetStream byte access?

So, I'm working with a video source that I'm feeding into my Adobe AIR application via some native extension work, with the goal of ultimately getting it to a Flash Media Server. The video is H.264 encoded and muxed into a FLV container, which aligns me with supported Flash Media Server codecs and NetStream (appendBytes) requirements. I can get the data into AIR just fine.
The mine I stepped onto today, however, is that documentation for NetStream.appendBytes states I must call NetStream.play(null):
Call this method on a NetStream in "Data Generation Mode". To put a NetStream into Data Generation Mode, call NetStream.play(null) on a NetStream created on a NetConnection connected to null. Calling appendBytes() on a NetStream that isn't in Data Generation Mode is an error and raises an exception.
NetStream.play() called with a null parameter yields local FLV playback. I can't publish the stream to FMS in this mode. But my research into Flash seems to indicate NetStream's byte access is my only real hope here when dealing with non-camera or non-web video data.
Q: Can I latch onto the video playback buffer for publish to a FMS? Can I create a sort of pipeline of NetStreams or NetConnections to achieve this? Or is there an alternate approach here for transmitting H.264/FLV data to FMS? (The source of my video cannot communicate with FMS directly.)
The answer to your question is quite simply no. This is apparently implemented as a security feature, which is probably less of a security issue and more of a sales issue: Adobe likes to block certain capabilities intentionally in order to create the possibility of, or need for, another product, i.e. more revenue.
I tried looking into this for you to see if there was some dirty hack where you could attach a camera or something and override the binary data being sent to the stream, like you can with audio, but unfortunately, to my knowledge, no such hack is possible. More info here: NetStream.appendBytes
Update
You might be able to do something hackish by using ManyCam, which is a virtual webcam driver (from what I understand). This will provide a valid camera you can select from Flash, and you can also select a video file as the source for ManyCam. See http://manycam.com/user_guide/#HowtoSelectaVideofileasthePictureSourceforManyCam
Update #2
If you're looking for something open source that will do the same thing as ManyCam, check out the following:
http://code.google.com/p/webcamstudio/wiki/VideoSourceMovie (GPL Licensed)