MediaCodec decoder issues on MTK platform? - android-mediacodec

I'm working on an Android video player project that requires decoding frames in real time. I use the MediaCodec framework for decoding. I tested my app on several devices and found that the MediaCodec decoder on the MTK platform holds back several frames in its own buffers, which makes the frame decode latency much higher and means I cannot meet the real-time requirement. For example, the decoder only returns a frame from dequeueOutputBuffer after I have queued four frames via dequeueInputBuffer. Are there any solutions to reduce the decode latency?
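One direction worth trying (a sketch, not a confirmed fix for MTK decoders): from Android 11 (API 30) there is a standard low-latency decoding hint, MediaFormat.KEY_LOW_LATENCY, which asks the decoder to emit frames as soon as possible instead of queueing several of them internally. Whether a given MTK codec advertises and honors it is device-dependent, and the width, height and output surface below are placeholders.

import android.media.MediaCodec
import android.media.MediaCodecInfo
import android.media.MediaFormat
import android.os.Build
import android.view.Surface

// Sketch: request low-latency decoding where the codec advertises support (API 30+).
fun createAvcDecoder(width: Int, height: Int, surface: Surface?): MediaCodec {
    val mime = MediaFormat.MIMETYPE_VIDEO_AVC
    val format = MediaFormat.createVideoFormat(mime, width, height)

    val codec = MediaCodec.createDecoderByType(mime)
    val caps = codec.codecInfo.getCapabilitiesForType(mime)
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R &&
        caps.isFeatureSupported(MediaCodecInfo.CodecCapabilities.FEATURE_LowLatency)
    ) {
        // 1 = ask the decoder to output each frame as soon as it is decoded
        // rather than holding it in an internal queue.
        format.setInteger(MediaFormat.KEY_LOW_LATENCY, 1)
    }
    codec.configure(format, surface, null, 0)
    return codec
}

On older platforms the remaining options are usually vendor-specific format keys (whose names vary by chipset) or accepting that the decoder keeps a few frames in flight.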

Related

What data is needed to train an object detection model for detecting objects in videos

I have a task to detect illegal parking. I want to use object detection models like YOLO, SSD and others to detect it in video. Can I train the model on the video itself? On all frames from the video? Or only on frames from the video that are not similar to each other?

Does MediaCodec support variable frame rates

When encoding a video using MediaCodec, I set the encoder configurations like this:
val format = MediaFormat.createVideoFormat(videoMime, width, height)
format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface)
format.setInteger(MediaFormat.KEY_BIT_RATE, 4000000)
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, iFrameIntervalSeconds)
format.setInteger(MediaFormat.KEY_FRAME_RATE, frameRate)
This implies that the KEY_FRAME_RATE is a fixed value. But I'm recording a video stream from the web on-the-fly and the frame rate for this stream can vary as the video is being streamed. Does MediaCodec support encoding videos where the frame rate can vary?
The documentation states the following:
For video encoders this value corresponds to the intended frame rate, although encoders are expected to support variable frame rate based on buffer timestamp. This key is not used in the MediaCodec input/output formats, nor by MediaMuxer.
This raises the question of what KEY_FRAME_RATE is actually used for, given that it only describes the "intended" frame rate. The documentation also states that encoders are expected to support variable frame rate, and since Android ships its own encoders, I suppose that means they do support it. But then it says the key is not used in the MediaCodec input/output formats. What is that supposed to mean? It seems to say variable frame rate is supported, yet the key itself isn't used. This is very unclear.
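For what it's worth, the usual reading (a sketch of how this generally works, not an authoritative interpretation of the docs): KEY_FRAME_RATE is only a hint used for rate control and capability checks, while the actual timing of each frame comes from its presentation timestamp. With ByteBuffer input that is the presentationTimeUs you pass to queueInputBuffer; with Surface input (COLOR_FormatSurface, as in the configuration above) it is the timestamp attached to each frame by the producing surface. The sketch below assumes a ByteBuffer-input encoder purely to show that the timestamps carry the variable frame rate; `frames` is a hypothetical list of (data, timestampUs) pairs captured from the stream.

import android.media.MediaCodec

// Sketch: feed frames with irregular spacing to an already-configured ByteBuffer-input encoder.
fun feedFrames(encoder: MediaCodec, frames: List<Pair<ByteArray, Long>>) {
    for ((data, timestampUs) in frames) {
        val inIndex = encoder.dequeueInputBuffer(10_000L) // 10 ms timeout
        if (inIndex < 0) continue // no input buffer free right now; a real app would retry
        val inBuf = encoder.getInputBuffer(inIndex) ?: continue
        inBuf.clear()
        inBuf.put(data)
        // The per-buffer timestamp, not KEY_FRAME_RATE, defines the frame timing,
        // so uneven gaps between timestamps are fine.
        encoder.queueInputBuffer(inIndex, 0, data.size, timestampUs, 0)
    }
}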

Comparing H.264 encoding/decoding performance

I am a beginner with video codecs, not a video codec expert.
I just want to know, based on the same criteria, which is more efficient: H.264 encoding or decoding?
Thanks
Decoding is more efficient. To be useful, decoding must run in real time, whereas encoding does not have to (except in videophone / conferencing applications).
How much more efficient? An encoder has to generate motion vectors. The more compute power spent generating those motion vectors, the more accurate they are. And the more accurate they are, the smaller the differences that remain to be coded, so more of the available bandwidth can go into the difference frames and the quality goes up.
So, the kind of encoding used to generate video for streaming or distribution on DVD or BD discs can run many times slower than real time on server farms. But decoding for that kind of program is useless unless it runs in real time.
Even in the case of real-time encoding, it takes more power (actual milliwatts, compute cycles, etc.) than decoding.
It's true of H.264, H.265, VP8, VP9, and other video codecs.

How to find the upper limit of hardware decoder instances for Google Pixel 2 phone

Can anyone please tell me how to check how many hardware decoder instances (OMX.qcom.video.decoder.avc) can be created on my Android phone (i.e. a Google Pixel 2) for decoding an H.264 video stream?
How to check this configuration?
There is a file on your phone, /etc/media_codecs.xml, which lists all available codecs and their 'Quirks', 'Limits' and 'Features'. As of Android 6, I think, there is a 'Limit' called concurrent-instances. Every codec should have that value.
E.g. <Limit name="concurrent-instances" max="16" />
This still doesn't guarantee that you can have 16 instances of a specific codec running at the same time, as it also depends on other factors like bitrate and resolution in conjunction with hardware resources. See it more as an upper limit on the number of codec instances.
I've seen devices where you could only have a single FHD decoder instance operating at a time even though concurrent-instances was set to 16. So it is still highly device dependent.
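In addition to reading media_codecs.xml, the same advertised limit can be queried at runtime through CodecCapabilities.getMaxSupportedInstances() (API 23+). A sketch, with the same caveat that the number is an upper bound rather than a guarantee:

import android.media.MediaCodecList
import android.media.MediaFormat

// Sketch: print the advertised concurrent-instance limit of every H.264 decoder (API 23+).
fun printAvcDecoderInstanceLimits() {
    val mime = MediaFormat.MIMETYPE_VIDEO_AVC
    for (info in MediaCodecList(MediaCodecList.REGULAR_CODECS).codecInfos) {
        if (info.isEncoder) continue
        if (!info.supportedTypes.any { it.equals(mime, ignoreCase = true) }) continue
        val caps = info.getCapabilitiesForType(mime)
        // Mirrors the concurrent-instances limit from media_codecs.xml.
        println("${info.name}: max concurrent instances = ${caps.maxSupportedInstances}")
    }
}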

Obtain the resulting ByteArray of the currently playing sounds

I am developing an AIR application for desktop that simulates a drum set. Pressing a key on the keyboard plays the corresponding drum sound in the application. I have placed music notes in the application so the user can try to play a particular song.
Now I want to record the whole performance and export it to a video file, say FLV. I have already succeeded in recording the video using this encoder:
http://www.zeropointnine.com/blog/updated-flv-encoder-alchem/
However, this encoder does not have the ability to record sound automatically. I need to find a way to get the sound as a ByteArray for each particular frame and pass it to the encoder. Each frame may have several Sound objects playing at the same time, so I need the bytes of the final mixed sound.
I am aware that SoundMixer.computeSpectrum() can return the currently playing sound as bytes. However, the ByteArray it returns has a fixed length of 512, which does not meet the encoder's requirement. After a bit of testing, with a sample rate of 44 kHz, 8-bit stereo, the encoder expects the audio byte array to have a length of 5880. The data returned by SoundMixer.computeSpectrum() is much shorter than what the encoder requires.
My application runs at 60 FPS and records at 15 FPS.
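(A quick sanity check, assuming the sample rate is really 44.1 kHz: 44100 samples/s ÷ 15 recorded frames/s = 2940 samples per video frame, and 2940 samples × 2 channels × 1 byte per 8-bit sample = 5880 bytes, which matches the length the encoder expects.)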
So my question is: is there any way I can obtain the audio bytes for the current frame, mixed from more than one Sound object, with enough data for the encoder to work? If there is no API to do that, I will have to mix the audio and compute the resulting bytes myself; how can that be done?