How to find the upper limit of hardware decoder instances for Google Pixel 2 phone - android-mediacodec

Can anyone please tell me how to check how many hardware decoder instances (OMX.qcom.video.decoder.avc) can be created on my Android phone (i.e. a Google Pixel 2) for decoding an H.264 video stream?
How to check this configuration?

There is a file on your phone, /etc/media_codecs.xml, which lists all available codecs and their 'Quirks', 'Limits' and 'Features'. As of Android 6, I think there is a 'Limit' called concurrent-instances. Every codec should have that value.
E.g. <Limit name="concurrent-instances" max="16" />
This still doesn't guarantee that you can have 16 instances of a specific codec running at the same time, as that also depends on other factors such as bitrate and resolution in conjunction with the available hardware resources. See it more as an uppermost limit on codec instances.
I've seen devices where only a single FHD decoder instance could operate at a time even though concurrent-instances was set to 16. So it is still highly device dependent.
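If you prefer to check this programmatically instead of reading the XML, a minimal Kotlin sketch is shown below. It assumes API level 23 or higher and uses MediaCodecInfo.CodecCapabilities.getMaxSupportedInstances(), which is typically populated from that same concurrent-instances entry; like the XML value, it is only an upper bound, not a guarantee.

import android.media.MediaCodecList

// Hedged sketch: list AVC decoders and their reported instance limit (API 23+).
// The value mirrors the concurrent-instances limit and is an upper bound only.
fun printAvcDecoderInstanceLimits() {
    val codecList = MediaCodecList(MediaCodecList.REGULAR_CODECS)
    for (info in codecList.codecInfos) {
        if (info.isEncoder) continue
        for (type in info.supportedTypes) {
            if (type.equals("video/avc", ignoreCase = true)) {
                val caps = info.getCapabilitiesForType(type)
                println("${info.name}: maxSupportedInstances = ${caps.maxSupportedInstances}")
            }
        }
    }
}

On a Pixel 2 you would expect OMX.qcom.video.decoder.avc to show up in that list alongside the software decoders.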

Related

Does MediaCodec support variable frame rates

When encoding a video using MediaCodec, I set the encoder configurations like this:
val format = MediaFormat.createVideoFormat(videoMime, width, height)
format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface)
format.setInteger(MediaFormat.KEY_BIT_RATE, 4000000)
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, iFrameIntervalSeconds)
format.setInteger(MediaFormat.KEY_FRAME_RATE, frameRate)
This implies that the KEY_FRAME_RATE is a fixed value. But I'm recording a video stream from the web on-the-fly and the frame rate for this stream can vary as the video is being streamed. Does MediaCodec support encoding videos where the frame rate can vary?
The documentation states the following:
For video encoders this value corresponds to the intended frame rate, although encoders are expected to support variable frame rate based on buffer timestamp. This key is not used in the MediaCodec input/output formats, nor by MediaMuxer.
This raises the question of what KEY_FRAME_RATE is even for, when the documentation says it is only the "intended" frame rate. It also states that encoders are expected to support variable frame rate, and since Android ships with its own encoders, I suppose that means they do support it. But then it says the key is not used in the MediaCodec input/output formats. What is that supposed to mean? It seems to say variable frame rate is supported, yet the key isn't used. This is very unclear.
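For what it's worth, in practice variable frame rate comes from the presentation timestamps you attach to each input buffer; KEY_FRAME_RATE is only a hint used at configure time (e.g. for rate control). Below is a minimal sketch of the ByteBuffer input path; the encoder instance, frame bytes and timestamps are hypothetical placeholders for your own pipeline.

import android.media.MediaCodec

// Hedged sketch: submit one frame with its real stream timestamp. Irregular
// spacing of frameTimeUs is what produces a variable frame rate; the value of
// KEY_FRAME_RATE does not constrain it.
fun submitFrame(encoder: MediaCodec, frameBytes: ByteArray, frameTimeUs: Long) {
    val inputIndex = encoder.dequeueInputBuffer(10_000L)
    if (inputIndex >= 0) {
        val inputBuffer = encoder.getInputBuffer(inputIndex) ?: return
        inputBuffer.clear()
        inputBuffer.put(frameBytes)
        encoder.queueInputBuffer(inputIndex, 0, frameBytes.size, frameTimeUs, 0)
    }
}

With Surface input (COLOR_FormatSurface, as in the snippet above), the equivalent step is setting the timestamp on the input surface before submitting the frame, e.g. via EGLExt.eglPresentationTimeANDROID when rendering with EGL.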

Comparing H.264 encoding/decoding performance

I am a beginner with video codecs, not a video codec expert.
I just want to know, based on the same criteria, which is more efficient: H.264 encoding or decoding?
Thanks
Decoding is more efficient. To be useful, decoding must run in real time, whereas encoding does not (except in videophone / conferencing applications).
How much more efficient? An encoder can generate motion vectors. The more compute power used on generating those motion vectors, the more accurate they are. And, the more accurate they are, the more bandwidth is available for the difference frames, so the quality goes up.
So, the kind of encoding used to generate video for streaming or distribution on DVD or BD discs can run many times slower than real time on server farms. But decoding for that kind of program is useless unless it runs in real time.
Even in the case of real-time encoding it takes more power (actual milliwatts, compute cycles, etc) than decoding.
It's true of H.264, H.265, VP8, VP9, and other video codecs.

Max bitrate value for Google Chrome browser

I have a simple question.
What is the current maximum bitrate value supported by the Google Chrome browser for a web camera?
For example, if I have a virtual source with a high-bitrate output (constant bitrate of 50 Mbps),
would I be able to get all 50 Mbps in my Chrome browser when using this device?
Thank you.
The camera's bitrate is irrelevant in this case, since WebRTC is going to re-encode that video with a codec that compresses it anyway.
What matters for WebRTC are 4 separate parameters:
The resolution supplied and the one the other end of the session is capable of receiving
The frame rate supplied and the one the other end of the session is capable of receiving
The network conditions - there's a limit enforced by the network and it is dynamic in nature, so WebRTC will try to estimate it at all times and accommodate to it
The maximum bitrate imposed by the participants
WebRTC by its nature will not limit the amount of bandwidth it takes and will try to use as much as it possibly can. That said, the actual bitrate used, even without any limits, will still depend on (1), (2) and the type of codec being used. It won't reach 50 Mbps...
For the most part, 2.5 Mbps will be enough for almost any type of content in WebRTC. 1080p will take up to 4 Mbps and 4K probably around 15 Mbps.
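To illustrate the fourth point, here is a hedged sketch of how a sending participant can impose a bitrate cap. It uses the Android WebRTC SDK (org.webrtc) rather than the browser API; browsers expose the same per-encoding cap through RTCRtpSender.setParameters. The videoSender argument is assumed to be the RtpSender attached to your local video track.

import org.webrtc.RtpSender

// Hedged illustration: cap the outgoing video bitrate on the sending side.
// A null maxBitrateBps means "no cap", in which case WebRTC uses as much
// bandwidth as the network, resolution, frame rate and codec allow.
fun capVideoBitrate(videoSender: RtpSender, maxBitrateBps: Int) {
    val parameters = videoSender.parameters
    for (encoding in parameters.encodings) {
        encoding.maxBitrateBps = maxBitrateBps
    }
    videoSender.setParameters(parameters)
}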

Robust detection of texture capabilities in threejs

I have to load some big textures in a threejs application, using the best the hardware can do.
I use:
var local_size = gl.getParameter(gl.MAX_TEXTURE_SIZE);
To determine the maximum texture size accepted by the device.
On a PC, what's returned by this call matches reality, e.g. if the call returns 8096 then I can use textures of that size.
On mobile devices though the situation varies quite a lot. For example, on a nexus 5, I get 4096 and a texture of this size does work.
On a Samsung Galaxy tab 3, I get 4096 but if I use this size, I just get a black texture... The maximum size I can use is 2048.
On a Nexus 4, same issue.
In summary: some browsers (at least Chrome on Android) simply return values that do not reflect the actual capabilities of their WebGL implementation.
I found, in the Khronos webgl regression test suite, some hints on how to test the actual capabilities.
https://www.khronos.org/registry/webgl/sdk/tests/conformance/limits/gl-max-texture-dimensions.html
However this is a lot of code and may slow down the startup of my application, so (at last) here's my question:
Any ideas on how to test the actual texture capabilities of a specific device? Maybe an error condition to check whether texture creation went wrong in Threejs?
Thank you for your help!

HLS - how to reduce delay?

Does anyone know how to configure an HLS media server to reduce the delay of live streaming video a bit?
What types of parameters do I need to change?
I have heard that you can do some tuning using parameters like this: HLSMediaFileDuration
Thanks in advance
An HTTP Live Streaming system typically has an encoder, which produces segments of a certain number of seconds, and a media server (web server), which serves playlists containing a list of URLs to these segments to player applications.
Media Files = Segments = .ts files = MPEG2-TS files (in HLS speak)
There are some ways to reduce the delay in HLS:
Reduce the encoded segment length from Apple's recommended 10 seconds to 5 seconds or less. Reducing segment length increases network overhead and load on the web server.
Use lower bitrates, larger .ts files take longer to upload and download. If you use multi-bitrate streams, make sure the first bitrate listed in the playlist is a little lower than the bitrate most of your users use. This will reduce the time to start playing back the stream
Get the segments from the encoder to the web server faster. Upload while still encoding if possible. Update the playlist as soon as the segment has finished uploading
Also remember that the higher the delay the better the quality of your stream (low delay = lower quality). With larger segments, there is less overhead so more space for video data. Taking a longer time to encode results in better quality. More buffering results in less chance of video streams stuttering on playback.
HLS is all about trading quality of playback for longer delay, so you will never be able to use HLS for things like video conferencing. Typical delay in HLS is 30-60 sec, minimum in practice is around 15 sec. If you want low delay use RTP for streaming, but good luck getting good quality on low or variable speed networks.
Please specify which media server you use. Generally speaking, yes - changing the chunk size will definitely affect the delay. The smaller the first chunk, the quicker the video will be shown in the player.
Actually, Apple recommends dividing your file into small chunks of equal length, with integer durations.
In practice, there is a huge difference between players. Some of them parse the manifest and handle these values differently.
A known practice is to pre-cache the first chunks in low and medium resolution in memory (or to try to download them in the background of the app/page - Amazon does this, though their video is MSS).
I was having the same problem and the keys for me were:
Lower the segment length. I set it to 2s because I'm streaming on a local network. On other types of networks, you need to be careful with the overhead that a low segment length adds, as it can impact your playback quality.
In your manifest, make sure the #EXT-X-TARGETDURATION is accurate. From here:
The EXT-X-TARGETDURATION tag specifies the maximum Media Segment duration. The EXTINF duration of each Media Segment in the Playlist file, when rounded to the nearest integer, MUST be less than or equal to the target duration; longer segments can trigger playback stalls or other errors. It applies to the entire Playlist file.
For some reason, the #EXT-X-TARGETDURATION in my manifest was set to 5 and I was seeing a 16-20s delay. After changing that value to 2, which is the correct one according to my segments' length, I am now seeing delays of 6-10s.
In summary, you should expect a delay of at least 3X your #EXT-X-TARGETDURATION. So, lowering the segment length and the #EXT-X-TARGETDURATION value should help reduce the delay.
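As a hedged illustration (segment names and sequence numbers are hypothetical), a live playlist tuned along these lines might look like the following, with 2-second segments, a matching target duration, and therefore a best-case delay somewhere around 3 x 2 s = 6 s:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:120
#EXTINF:2.000,
segment120.ts
#EXTINF:2.000,
segment121.ts
#EXTINF:2.000,
segment122.ts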