Max bitrate value for Google Chrome browser - google-chrome

I have a simple question.
What is the current maximum bitrate supported by the Google Chrome browser for a web camera?
For example, if I have a virtual source with high-bitrate output (constant bitrate, 50 Mbps),
would I be able to get all 50 Mbps in my Chrome browser when using this device?
Thank you.

The camera's bitrate is irrelevant in this case, since WebRTC is going to encode that information using a video codec that compresses it anyway.
What matters for WebRTC are 4 separate parameters:
The resolution supplied and the one the other end of the session is capable of receiving
The frame rate supplied and the one the other end of the session is capable of receiving
The network conditions - there's a limit enforced by the network, and it is dynamic in nature, so WebRTC will try to estimate it at all times and adapt to it
The maximum bitrate imposed by the participants (a sketch of how to set this follows below)
By its nature, WebRTC will not limit the amount of bandwidth it takes and will try to use as much as it possibly can. That said, the actual bitrate used, even without any limits, will still depend on (1), (2) and the type of codec being used. It won't reach 50 Mbps...
For the most part, 2.5 Mbps will be enough for almost any type of content in WebRTC. 1080p will take up to 4 Mbps, and 4K probably around 15 Mbps.
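For point (4), a minimal sketch of how a participant can impose its own cap, assuming pc is an already-negotiated RTCPeerConnection (the 2.5 Mbps figure is just the guideline from above); this uses the standard RTCRtpSender.getParameters()/setParameters() API:

async function capVideoBitrate(pc: RTCPeerConnection, maxBitrateBps: number) {
  // Find the sender that carries the outgoing video track.
  const sender = pc.getSenders().find(s => s.track?.kind === 'video');
  if (!sender) return;
  const params = sender.getParameters();
  // Some browsers return an empty encodings list before negotiation completes.
  if (!params.encodings?.length) params.encodings = [{}];
  params.encodings[0].maxBitrate = maxBitrateBps; // in bits per second
  await sender.setParameters(params);
}
// e.g. capVideoBitrate(pc, 2_500_000); // cap outgoing video at 2.5 Mbps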

Related

How does chrome://webrtc-internals measure the round trip time?

I have been analyzing the JSON file generated by chrome://webrtc-internals while running WebRTC on 2 PCs.
I looked at the Stats API to verify how webrtc-internals computes the round trip time (RTT).
I found 2 ways:
RTC Remote Inbound RTP Video Stream that contains roundTripTime
RTC IceCandidate Pair that contains currentRoundTripTime.
Which one is accurate, why, and how is it computed?
Is RTT computed on a frame-by-frame basis?
Is it computed one way (sender --> receiver), or two ways (sender --> receiver--> sender)?
Which reports are used to measure the RTT? Is it Receiver Report RTCP or Sender Report RTCP?
What is the length of the GOP in the WebRTC VP8 codec?
RTCIceCandidatePairStats.currentRoundTripTime is computed from how long it takes the remote peer to respond to a STUN Binding Request. The WebRTC ICE agent sends these on an interval, and each message has a TransactionID.
RTCRemoteInboundRtpStreamStats.roundTripTime is computed from RTCP: the sender knows when it sent its last Sender Report, and the Receiver Report echoes that timestamp back, so the sender can compute how long the round trip took.
They are both accurate. Personally, I use the ICE stats, since there is less overhead: the packet doesn't have to be decrypted and routed through the RTCP subsystem. IMO, ICE is also easier to deal with than RTCP.
As for the length of the GOP in the WebRTC VP8 codec: it depends on what is being encoded and on the settings. Do you have a low keyframe interval? Are you encoding something with lots of changes? What are you trying to determine with this question?
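For reference, a hedged sketch of reading both values via getStats() in the browser (field and type names per the W3C webrtc-stats spec; pc is assumed to be your RTCPeerConnection):

async function logRoundTripTimes(pc: RTCPeerConnection) {
  const report = await pc.getStats();
  report.forEach((stat: any) => {
    // ICE-level RTT, measured from STUN Binding Request/Response pairs:
    if (stat.type === 'candidate-pair' && stat.nominated) {
      console.log('ICE RTT (s):', stat.currentRoundTripTime);
    }
    // RTCP-level RTT, derived from Sender Report / Receiver Report timestamps:
    if (stat.type === 'remote-inbound-rtp') {
      console.log('RTCP RTT (s):', stat.roundTripTime);
    }
  });
}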

How to find the upper limit of hardware decoder instances for Google Pixel 2 phone

Can anyone please tell me how to check how many hardware decoder instances (OMX.qcom.video.decoder.avc) can be created on my Android phone (i.e. a Google Pixel 2) for decoding an H.264 video stream?
How to check this configuration?
There is a file on your phone, /etc/media_codecs.xml, which lists all available codecs and their 'Quirks', 'Limits' and 'Features'. As of Android 6, I think, there is a 'Limit' called concurrent-instances. Every codec should have that value.
E.g. <Limit name="concurrent-instances" max="16" />
This still doesn't guarantee that you can have 16 instances of a specific codec running at the same time, as that also depends on other factors such as bitrate and resolution in conjunction with hardware resources. See it more as an upper bound on the number of codec instances.
I've seen devices where you could only have a single FHD decoder instance operating at a time while concurrent-instances was set to 16. So it is still highly device dependent.

HLS - how to reduce delay?

Does anyone know how to configure an HLS media server to reduce the delay of live streaming video a bit?
What kinds of parameters do I need to change?
I had heard that you could do some tuning using parameters like this: HLSMediaFileDuration
Thanks in advance
An HTTP Live Streaming (HLS) system typically has an encoder, which produces segments of a certain number of seconds, and a media server (web server), which serves playlists containing a list of URLs to these segments to player applications.
Media Files = Segments = .ts files = MPEG2-TS files (in HLS speak)
There are some ways to reduce the delay in HLS:
Reduce the encoded segment length from Apple's recommended 10 seconds to 5 seconds or less. Reducing segment length increases network overhead and load on the web server.
Use lower bitrates; larger .ts files take longer to upload and download. If you use multi-bitrate streams, make sure the first bitrate listed in the playlist is a little lower than the bitrate most of your users use. This will reduce the time to start playing back the stream.
Get the segments from the encoder to the web server faster. Upload while still encoding if possible, and update the playlist as soon as the segment has finished uploading.
Also remember that the higher the delay, the better the quality of your stream (low delay = lower quality). With larger segments there is less overhead, so more space for video data; taking longer to encode results in better quality; and more buffering results in less chance of the video stuttering on playback.
HLS is all about trading playback delay for quality, so you will never be able to use HLS for things like video conferencing. The typical delay in HLS is 30-60 sec; the practical minimum is around 15 sec. If you want low delay, use RTP for streaming, but good luck getting good quality on low- or variable-speed networks.
Please specify which media server you use. Generally speaking, yes: changing the chunk size will definitely affect the delay time. The smaller the first chunk, the sooner the video will be shown in the player.
Actually, Apple recommends dividing your file into small chunks of equal, integer-second length.
In practice, there is a huge difference between players. Some of them parse the manifest and change these values.
A known practice is to pre-cache the first chunks in low and medium resolution in memory (or to try to download them in the background of the app/page; Amazon does this, though their video is MSS).
I was having the same problem and the keys for me were:
Lower the segment length. I set it to 2 s because I'm streaming on a local network. On other types of networks, you need to be careful with the overhead that a low segment length adds, as it can impact your playback quality.
In your manifest, make sure the #EXT-X-TARGETDURATION is accurate. From here:
The EXT-X-TARGETDURATION tag specifies the maximum Media Segment duration. The EXTINF duration of each Media Segment in the Playlist file, when rounded to the nearest integer, MUST be less than or equal to the target duration; longer segments can trigger playback stalls or other errors. It applies to the entire Playlist file.
For some reason, the #EXT-X-TARGETDURATION in my manifest was set to 5 and I was seeing a 16-20s delay. After changing that value to 2, which is the correct one according to my segments' length, I am now seeing delays of 6-10s.
In summary, you should expect a delay of at least 3x your #EXT-X-TARGETDURATION. So, lowering the segment length and the #EXT-X-TARGETDURATION value should help reduce the delay.
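For illustration, a minimal live media playlist matching that setup might look like the following (the segment file names are hypothetical); note the #EXT-X-TARGETDURATION matching the 2-second #EXTINF values, per the rule quoted above:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:120
#EXTINF:2.000,
segment120.ts
#EXTINF:2.000,
segment121.ts
#EXTINF:2.000,
segment122.ts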

WebRTC: peer connections limit?

How many peer connections can I create on a single client? Is there any limit?
I assume you've arrived at 256 experimentally since there is currently no documentation/specs to suggest it. I don't know exactly how things have changed since 2013, but currently, my own experiments cap out at 500 simultaneous connections per page. As far as I can tell, Firefox has no such limit.
The real limit, according to the Chromium source code, is 500 (source). As far as I can tell, there was no limit before this was implemented (source), even going as far back as the WebKit days.
I think one reason it can be tricky to keep track of is that Chrome (and FF, for that matter) have always been bad at garbage-collecting dead connections. If you check chrome://webrtc-internals (FF equivalent: about:webrtc), there will often be a build-up of zombie connections that count towards the 500 limit. These persist until you manually destroy them or close/refresh the page. One way to work around this is through your own heartbeat implementation, or using the signalling server to notify peers of disconnections so that other peers can destroy their connection (although this requires a persistent connection to a signalling server).
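As a hedged sketch of that workaround (the signalling channel and the 'peer-left' message shape are assumptions for illustration; only RTCPeerConnection.close() is standard):

// `signalling` stands in for your persistent signalling channel (e.g. a WebSocket).
declare const signalling: WebSocket;
const peers = new Map<string, RTCPeerConnection>();

signalling.addEventListener('message', (ev: MessageEvent) => {
  const msg = JSON.parse(ev.data);
  if (msg.type === 'peer-left') {   // hypothetical message type
    peers.get(msg.peerId)?.close(); // frees a slot counted toward the limit
    peers.delete(msg.peerId);
  }
});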
The maximum peer connection limit is 256 (on Chrome).
I'm not sure about other major browsers; depending on your bandwidth, they are limited so as to maintain a certain stability.
I'm not sure if there is any hard limit (other than runtime memory), but there is certainly a soft one.
If you are considering a full-mesh topology (an app in which every client is connected to every other client), then you have to consider the main deficiency of this topology: the bandwidth required to sustain the overall session grows with each new participant (see the sketch after this answer).
Therefore, users with low bandwidth will not be able to handle a video conference session with a large number of participants.
Hope it helps.
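To make the mesh deficiency concrete, a back-of-the-envelope sketch (the 2.5 Mbps per-stream figure is an assumption borrowed from the first answer above):

// In a full mesh, each of the n clients uploads its own stream to n - 1 peers
// and downloads n - 1 streams, so per-client bandwidth grows linearly with n.
function meshUplinkMbps(n: number, perStreamMbps = 2.5): number {
  return (n - 1) * perStreamMbps;
}
console.log(meshUplinkMbps(3)); // 5 Mbps    - usually fine
console.log(meshUplinkMbps(6)); // 12.5 Mbps - already beyond many home uplinks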
This is an interesting topic. I was just watching a YouTube video about multi-peer WebRTC. The presenter said it just depends on the number of peers, but the most he demonstrated was fewer than 6 peers. It also depends on your bandwidth. The best thing you can do is build a WebRTC app, try connecting with your friends, and judge for yourself, as this also depends on the country you are in. I live in Botswana, where the network is not fast, so I wouldn't expect to have 6 peers when I still struggle to get clear communication with only one person here.
According to this source:
In practice, even under optimal network conditions, a mesh video call doesn’t work well beyond five participants.

Flex/Actionscript determine if NetStream has audio by analysing audioBytesPerSecond

I need to "look" at a NetStream and determine if I'm receiving audio. From what I investigated, i may use the property audioBytesPerSecond from NetStreamInfo:
"(audioBytesPerSecond) Specifies the rate at which the NetStream audio
buffer is filled in bytes per second. The value is calculated as a
smooth average for the audio data received in the last second."
I also learned that the NetStream may contain some overhead bytes from the network, so what is the minimum reasonable audioBytesPerSecond value for determining that the NetStream is playing audio (and not just noise, for example)?
Can this analysis be done this way?
Thanks in advance!
Yes you can do it this way. It's rather subjective, however.
Try to find a threshold that works for you. We used 5 kilobits/sec in the past; if the amount of data falls below this value, they are likely not sending any audio. Note: we were using the stream.info.byteCount property (you might want a slightly lower value if you're using audioBytesPerSecond).
This is pretty easy to observe if you speak into the microphone and periodically check audioBytesPerSecond or the other counters/statistics that are available.
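A rough sketch of that threshold check (written in TypeScript here; on the Flex side you would read ns.info.audioBytesPerSecond on a Timer, and the 625 bytes/s threshold is simply the 5 kilobits/s above converted to bytes, so tune it for your streams):

// 5 kilobits/s = 625 bytes/s; below this, assume no real audio is arriving.
const AUDIO_THRESHOLD_BYTES_PER_SEC = 625;

// `audioBytesPerSecond` stands in for the value read from ns.info on the Flex
// side; poll it roughly once per second, since it is a one-second average.
function hasAudio(audioBytesPerSecond: number): boolean {
  return audioBytesPerSecond >= AUDIO_THRESHOLD_BYTES_PER_SEC;
}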