Is a single integer considered an "audio frame" in MediaCodec? - android-mediacodec

I read the following from the official docs on MediaCodec:
Raw audio buffers contain entire frames of PCM audio data, which is one sample for each channel in channel order. Each PCM audio sample is either a 16 bit signed integer or a float, in native byte order.
https://source.android.com/devices/graphics/arch-sh
The way I read this is that a buffer contains an entire frame of audio, but a frame is just one signed integer. That doesn't seem to make sense. Or does it mean two values, one for the left channel and one for the right? Why call it a buffer when it only contains a single value? To me, a buffer means several values spanning several milliseconds.

Here's what the docs for AudioFormat say:
For linear PCM, an audio frame consists of a set of samples captured at the same time, whose count and channel association are given by the channel mask, and whose sample contents are specified by the encoding. For example, a stereo 16 bit PCM frame consists of two 16 bit linear PCM samples, with a frame size of 4 bytes.
You are right that it wouldn't make sense to use a buffer for just one frame, and in practice buffers are filled with many frames. The docs are saying that a buffer contains whole frames (a frame is never split across two buffers), not that it contains only one frame; each frame is one sample per channel.
You can figure out the number of frames in a buffer from the size property of MediaCodec.BufferInfo and the frame size.
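For stereo 16-bit PCM that works out as follows (a minimal Python sketch of the arithmetic; buffer_size stands in for the BufferInfo.size value):
# Frame-count arithmetic for a PCM buffer (illustrative values).
BYTES_PER_SAMPLE = 2                        # 16-bit signed PCM
CHANNELS = 2                                # stereo
FRAME_SIZE = BYTES_PER_SAMPLE * CHANNELS    # 4 bytes per frame

buffer_size = 4096                          # stand-in for BufferInfo.size
num_frames = buffer_size // FRAME_SIZE      # -> 1024 frames in this buffer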

Related

Identifying frame type from h264 bitstream over RTP

I need to identify each frame type within an h264 bitstream over RTP. So far, I've been able to identify basically everything that Wireshark can detect from the bitstream, including:
Sequence parameter set
NAL and fragmentation unit headers, including the nal_ref_idc and NAL type
NAL unit payload, including the slice_type field.
From what I understand, nal_ref_idc can be combined with the slice_type field to work out the slice type - that is, I, P or B - but I'm struggling to understand how that works.
Finally, I'm not sure how to identify the type of the frame from this. At first I thought that the slices were the same as frames, but that isn't the case. How can I tell, or estimate, which slices belong to the same frame, and then identify the frame type?
Thanks!
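For the slice_type part: it is an unsigned Exp-Golomb value that comes right after first_mb_in_slice at the start of the slice header. A minimal Python sketch (assuming you already have the RBSP bytes that follow the one-byte NAL header; emulation-prevention bytes are ignored for brevity):
def read_ue(data, bitpos):
    # Decode one unsigned Exp-Golomb value; return (value, new_bitpos).
    zeros = 0
    while (data[bitpos // 8] >> (7 - bitpos % 8)) & 1 == 0:
        zeros += 1
        bitpos += 1
    bitpos += 1                      # skip the terminating 1 bit
    suffix = 0
    for _ in range(zeros):
        suffix = (suffix << 1) | ((data[bitpos // 8] >> (7 - bitpos % 8)) & 1)
        bitpos += 1
    return (1 << zeros) - 1 + suffix, bitpos

SLICE_TYPES = {0: "P", 1: "B", 2: "I", 3: "SP", 4: "SI"}  # values 5-9 repeat 0-4

def slice_info(rbsp):
    first_mb, pos = read_ue(rbsp, 0)    # first_mb_in_slice
    slice_type, _ = read_ue(rbsp, pos)  # slice_type per H.264 Table 7-6
    return first_mb, SLICE_TYPES[slice_type % 5]
As for grouping slices into frames: a slice with first_mb_in_slice == 0 normally starts a new picture, so slices between two such headers (with the same frame_num) generally belong to the same frame.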

In pyaudio, how can I get a stream to 'consistently' produce sound?

I am using an endless while loop to convert byte data (read in single chunks) to integer values, 'manipulate' those values, convert them back to bytes, and write the bytes to the PyAudio stream (for sound).
Everything plays smoothly until I write a complex function that takes up too much processing time. Then I hear a bunch of pops, snaps and clicks over the audio. The reason this happens is that between one chunk being written to the PyAudio stream and the loop coming around to provide the next one, there is a 'transition' of silence while the loop catches up, and that is what creates the pops between chunks if the loop is 'too slow'.
Is there a way to hold the 'voltage' going to the speakers constant at the last data value provided to the PyAudio stream? That would be a great way to smooth out the 'pops', 'snaps' and 'clicks' when playing sound, instead of there being silence until the next value is passed to the stream. The reason I don't use a chunk size greater than 1 is that I want to do many 'creative' things with PyAudio (through an endless loop), and have values computed inside the loop determine the 'voltage level' going to the speakers.
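One way to avoid the silent gaps is PyAudio's callback mode, where the library asks for samples on the audio clock instead of waiting on a Python while loop. A rough sketch (the per-sample math here is only a placeholder for the 'creative' processing):
import struct
import time
import pyaudio

RATE = 44100
last_sample = 0

def synthesize(frame_count):
    # Placeholder for the per-sample 'creative' math from the question.
    global last_sample
    out = []
    for _ in range(frame_count):
        last_sample = (last_sample + 50) % 20000   # dummy ramp wave
        out.append(last_sample)
    return out

def callback(in_data, frame_count, time_info, status):
    # PyAudio calls this on the audio clock; returning exactly frame_count
    # frames means there is never a gap of silence between chunks.
    samples = synthesize(frame_count)
    return struct.pack("<%dh" % len(samples), *samples), pyaudio.paContinue

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 output=True, stream_callback=callback)
stream.start_stream()
while stream.is_active():   # keep the main thread alive while audio plays
    time.sleep(0.1)
If the processing still falls behind, repeating the last sample value inside the callback (a crude hold) avoids the hard jump to silence that causes the clicks.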

Zero-padded h264 in mdat

I'd like to do some stuff with h.264 data recorded from an Android phone.
My colleague told me there should be 4 bytes right after mdat which specify the NALU size, then one byte of NALU header and then the raw data; after that (NALU-size bytes later) come another 4 bytes with the next NALU size, and so on.
But I have a lot of zeros right after mdat:
0000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000
0e00000000000000000000000000000000000000000000000000000000000000
8100000000000000000000000000000000000000000000000000000000000000
65b84f0f87fa890001022e7fcc9feef3e7fabb8e0007a34000f2bbefffd07c3c
bfffff08fbfefff04355f8c47bdfd05fd57b1c67c4003e89fe1fe839705a699d
c6532fb7ecacbfffe82d3fefc718d15ffffbc141499731e666f1e4c5cce8732f
bf7eb0a8bd49cd02637007d07d938fd767cae34249773bf4418e893969b8eb2c
Before the mdat atom there are just ftyp (mp42, isom mp42) and free atoms. All the other atoms (moov, ...) are at the end of the file (that's what Android does when it writes to a socket instead of a file). But if necessary, I've got PPS and SPS from another file recorded a second earlier with the same camera and encoder settings, just to have that data.
So how exactly can I get the NALUs from that?
You can't. The moov atom contains the information required to parse the mdat; without it the mdat has little value. For instance, the first NALU does not need to start at the beginning of the mdat - it can start anywhere within it, and the byte offset it starts at is recorded in (I believe) the stco box. If the file has audio, you will find audio and video interleaved within the mdat, with no way to tell which is which without the chunk offsets. In addition, if the video has B-frames, there is no way to determine render order without the composition times (cts), again only available in the moov. And technically, the NALU size field does not need to be 4 bytes, and you can't know its width without the moov. I recommend not using mp4 here; use a streamable container such as TS or FLV instead. Now, if you can make some assumptions about the code that is producing the file - for example that the first chunk offset is always the same and there are no B-frames - you can hard-code those values, but that is not guaranteed to keep working after a software update.
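That said, if you do hard-code those assumptions (a known start offset, a 4-byte length field, no interleaved audio), walking the length-prefixed NALUs is straightforward. A hedged Python sketch, where start and the 4-byte width are exactly the assumptions that would normally come from the moov:
import struct

def iter_nalus(mdat, start):
    # Walk 4-byte-length-prefixed NALUs from an assumed start offset.
    pos = start
    while pos + 4 <= len(mdat):
        (size,) = struct.unpack(">I", mdat[pos:pos + 4])
        if size == 0 or pos + 4 + size > len(mdat):
            break                        # hit padding or ran off the end
        payload = mdat[pos + 4:pos + 4 + size]
        nal_type = payload[0] & 0x1F     # e.g. 5 = IDR slice, 1 = non-IDR
        yield nal_type, payload
        pos += 4 + size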

x264 rate control modes

Recently I have been reading the x264 source code, mostly the RC (rate control) part, and I am confused about the parameters --bitrate and --vbv-maxrate. When bitrate is set, CBR mode is used at the frame level. If you want to enable MB-level RC, the parameters bitrate, vbv-maxrate and vbv-bufsize should all be set. But I don't know the relationship between bitrate and vbv-maxrate. What determines the real encoding result when bitrate and vbv-maxrate are both set?
And what is the recommended value for bitrate? Equal to vbv-maxrate?
Also what is the recommended value for vbv-bufsize? Half of vbv-maxrate?
Please give me some advice.
bitrate addresses the "target file size" when you are encoding. It is understandably confusing, because it applies a "budget" of a certain size and then tries to apportion that budget across the frames - that is why the later parts of a movie can get a smaller amount of data, which results in lower video quality. For example, 10 seconds of completely black images followed by 10 seconds of natural video will produce a very different encoded file than the same content in the opposite order.
vbv-bufsize models the buffer that has to be filled before a "transmission" would occur, say in a streaming scenario. To tie this to I-frames and P-frames: vbv-bufsize will limit the size of any single encoded video frame - most likely the I-frame.
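As a concrete starting point (the numbers are illustrative, not official guidance), capped ABR is normally set up with all three flags together:
x264 --bitrate 2000 --vbv-maxrate 2500 --vbv-bufsize 2500 -o out.mkv input.y4m
Here bitrate is the long-term average target, vbv-maxrate caps the short-term rate, and vbv-bufsize controls how far the encoder may run above the average before the cap bites; a bufsize of roughly one second of vbv-maxrate is a commonly seen starting point for streaming.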

How to encode a stream of RGBA values to video?

More specifically:
I have a sequence of 32 bit unsigned RGBA integers for pixels - e.g. 640 integers per row starting at the left pixel, 480 rows per frame starting at the top row, repeated for n frames. Is there an easy way to feed this to ffmpeg (or some other encoder) without first encoding it to a common image format?
I'm assuming ffmpeg is the best tool for me to use in this case, but I'm open to suggestions (the output video format doesn't matter too much).
I know the documentation would enlighten me if I just knew the right keywords... In case I'm asking the wrong question, here's what I'm trying to do at the highest level:
I have some Actionscript code that draws and animates on the display tree, and I've wrapped it in an AIR application that draws BitmapData frame-by-frame. AIR has proved to be woefully inefficient at directly encoding this output - the best I've managed is a few frames per second, and I need to render at least 15 fps, preferably more like 100 fps, which I get out of ffmpeg when I feed it PNG images (AIR can take 1+ seconds to encode one 640x480 PNG... appalling). Instead of encoding inside AIR I can send the raw byte data out to an encoder or to disk as fast as it's rendered.
If you're wondering why I'm using Actionscript to render an animation or why it has to be encoded quickly, don't. Suffice it to say, the frames are computed at execution time (not stored as an animation in a .swf file, for example), I have a very large amount of video to create and limited time to do so, and using something other than Actionscript to produce the frames is not an option.
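For what it's worth, ffmpeg can also ingest the raw bytes directly through its rawvideo demuxer, so no intermediate image format is needed; something along these lines, assuming 640x480 RGBA at 15 fps piped to stdin (values illustrative):
ffmpeg -f rawvideo -pix_fmt rgba -s 640x480 -r 15 -i - -c:v libx264 out.mp4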
The solution I've come up with is to use x264 instead of ffmpeg.
For testing purposes, I saved frames as files: 00.bin, 01.bin, .. nn.bin, containing 640x480x4 ARGB pixel values. The command I used to verify that the approach is feasible is the following horrible hack:
cat *.bin | \
perl -e 'while (sysread(STDIN,$d,4)){print pack("N",unpack("V",$d));}' | \
x264 --demuxer raw --input-csp bgra --fps 15 --input-res 640x480 --qp 0 \
--muxer flv -o out.flv -
The ugly perl snippet in there is a hack to swap the byte order of each four-byte pixel, since x264 can only take BGRA and my test files contained ARGB.
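An equivalent Python filter for the same swap - reversing each 4-byte pixel turns ARGB into BGRA (reading one pixel at a time mirrors the perl one-liner; chunked reads would be faster):
import sys

# Read 4-byte ARGB pixels from stdin, write them reversed (BGRA) to stdout.
while True:
    px = sys.stdin.buffer.read(4)
    if len(px) < 4:
        break
    sys.stdout.buffer.write(px[::-1])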
In a nutshell,
Actionscript renders ARGB values into ByteArray
swap the endian to BGRA
pipe it to x264: raw demuxer, bgra colorspace, specify fps/w/h/quality
??
profit.