How to detect GOP structure from an H.264/HEVC bitstream? Can I get it from SPS info?

Is it possible to get the GOP structure from SPS info? I need to know the number of B frames in a GOP.

No, it's not possible without parsing the entire stream, as every GOP can have a different number of B frames.
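Since the whole stream has to be scanned anyway, one practical approach is to dump the per-frame picture types (e.g. with `ffprobe -show_entries frame=pict_type`) and fold them into per-GOP B-frame counts. A minimal sketch, assuming you already have the decode-order picture types as a sequence of "I"/"P"/"B" strings:

```python
def b_frames_per_gop(pict_types):
    """Split a decode-order pict_type sequence at each I frame and
    count the B frames inside every GOP."""
    gops, current = [], None
    for t in pict_types:
        if t == "I":              # an I frame opens a new GOP
            if current is not None:
                gops.append(current)
            current = 0
        elif t == "B" and current is not None:
            current += 1
    if current is not None:
        gops.append(current)
    return gops

print(b_frames_per_gop(list("IPBBPBBIPPB")))  # [4, 1]
```

As the answer says, the counts can differ from GOP to GOP, which is exactly what this makes visible.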

Related

Identifying frame type from h264 bitstream over RTP

I need to identify each frame type within a h264 bitstream over RTP. So far, I've been able to identify basically everything that wireshark can detect from the bitstream, including:
Sequence parameter set
NAL and Fragmentation unit headers, including the nal_ref_idc, and nal type
NAL unit payload, including the slice_type field.
From what I understand, nal_ref_idc can be combined with slice_type to identify the slice type - that is, I, P or B. But I'm struggling to understand how that mapping works.
Finally, I'm not sure how to identify the frame type from this. At first I thought that slices were the same as frames, but that isn't the case. How can I tell, or estimate, which slices belong to the same frame, and then identify the frame type?
Thanks!
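The fields mentioned in the question can be read directly from the NAL bytes: the first byte carries nal_unit_type (1 = non-IDR slice, 5 = IDR slice), and the slice header then starts with two Exp-Golomb values, first_mb_in_slice and slice_type. A slice with first_mb_in_slice == 0 normally starts a new picture (assuming no arbitrary slice order), which is one way to group slices into frames. A minimal sketch that ignores emulation-prevention bytes (safe for the first couple of header bytes in most streams):

```python
class BitReader:
    """MSB-first bit reader over a bytes object."""
    def __init__(self, data):
        self.data, self.pos = data, 0
    def bit(self):
        b = (self.data[self.pos // 8] >> (7 - self.pos % 8)) & 1
        self.pos += 1
        return b
    def bits(self, n):
        v = 0
        for _ in range(n):
            v = (v << 1) | self.bit()
        return v
    def ue(self):
        """Unsigned Exp-Golomb: count leading zeros, then read that many bits."""
        zeros = 0
        while self.bit() == 0:
            zeros += 1
        return (1 << zeros) - 1 + self.bits(zeros)

SLICE_TYPES = {0: "P", 1: "B", 2: "I", 3: "SP", 4: "SI"}

def parse_slice_nal(nal):
    """Return (nal_unit_type, first_mb_in_slice, slice kind) for a coded slice."""
    nal_type = nal[0] & 0x1F
    if nal_type not in (1, 5):          # not a coded slice NAL
        return (nal_type, None, None)
    r = BitReader(nal[1:])
    first_mb = r.ue()                   # 0 => first slice of a new picture
    slice_type = r.ue()                 # 5..9 mean "whole picture uses this type"
    return (nal_type, first_mb, SLICE_TYPES[slice_type % 5])

# IDR NAL header 0x65, then first_mb_in_slice=0, slice_type=7 (I)
print(parse_slice_nal(bytes([0x65, 0x88])))  # (5, 0, 'I')
```

The names here are illustrative, not from any library; a real parser would also strip the 0x000003 emulation-prevention bytes before bit-reading.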

Where can I cut a H.264 video stream without re-compressing?

I am trying to cut a H.264 video stream without decoding and re-encoding. To find a cutting point in the video stream:
Do I first detect the I-frame and then capture the video for the desired time?
Am I right, or do I have to look for a combination of I, P, and B frames?
Typically, H.264 bitstreams start with a sequence parameter set (SPS) and a picture parameter set (PPS), followed by an IDR frame, which is then followed by other arbitrary frames (P, B, etc.). The parameter sets are required to initialise the decoder correctly.
Therefore, to be decodable, each segment you cut should ideally begin with the parameter sets; whether each IDR is actually preceded by parameter sets depends on the encoder and its configuration.
You'll be able to determine your requirements by looking at the NAL unit types of the bitstream you want to cut.
However, it is also possible to supply the SPS and PPS to a decoder out of band. In that case it can decode a bitstream that starts directly at an IDR.
You don't have to look for combinations of I, P and B frames; just make sure you have the parameter sets and that your segment begins with an IDR.
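The NAL-type scan described above is straightforward for an Annex-B stream: find each 3- or 4-byte start code and read the low 5 bits of the following byte (7 = SPS, 8 = PPS, 5 = IDR slice). A minimal sketch under those assumptions:

```python
def annexb_nal_types(data):
    """Return (offset_of_header_byte, nal_unit_type) for every NAL
    found behind a 00 00 01 start code (a 4-byte 00 00 00 01 start
    code contains the 3-byte pattern, so it is found too)."""
    out, i = [], 0
    while True:
        j = data.find(b"\x00\x00\x01", i)
        if j < 0 or j + 3 >= len(data):
            break
        hdr = j + 3
        out.append((hdr, data[hdr] & 0x1F))
        i = hdr
    return out

stream = (b"\x00\x00\x00\x01\x67\xff"    # SPS (type 7)
          b"\x00\x00\x01\x68\xff"        # PPS (type 8)
          b"\x00\x00\x01\x65\xff")       # IDR slice (type 5)
print([t for _, t in annexb_nal_types(stream)])  # [7, 8, 5]
```

A cut point is then any IDR (type 5) offset, ideally one preceded by a 7/8 pair, per the answer above.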

Zero-padded h264 in mdat

I'd like to do some stuff with h.264 data recorded from Android phone.
My colleague told me there should be 4 bytes right after mdat which specify the NALU size, then one byte of NALU metadata, then the raw data; after that another 4-byte NALU size, and so on.
But I have a lot of zeros right after mdat:
0000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000
0e00000000000000000000000000000000000000000000000000000000000000
8100000000000000000000000000000000000000000000000000000000000000
65b84f0f87fa890001022e7fcc9feef3e7fabb8e0007a34000f2bbefffd07c3c
bfffff08fbfefff04355f8c47bdfd05fd57b1c67c4003e89fe1fe839705a699d
c6532fb7ecacbfffe82d3fefc718d15ffffbc141499731e666f1e4c5cce8732f
bf7eb0a8bd49cd02637007d07d938fd767cae34249773bf4418e893969b8eb2c
Before the mdat atom there are just ftyp mp42, isom mp42 and free atoms. All the other atoms (moov, ...) are at the end of the file (that's what Android does when it writes to a socket instead of a file). But if necessary, I've got the PPS and SPS from another file recorded a second earlier with the same camera and encoder settings.
So how exactly can I get the NALUs from that?
You can't, in general. The moov atom contains the information required to parse the mdat; without it, the mdat has little value. For instance, the first NALU does not need to start at the beginning of the mdat: it can start anywhere within it, and the byte offset it starts at is recorded in (I believe) the stco box. If the file has audio, audio and video are interleaved within the mdat with no way to tell which is which without the chunk offsets. In addition, if the video has B frames, there is no way to determine render order without the cts, again only available in the moov. And technically, the NALU size field does not need to be 4 bytes, and you can't know its length without the moov.

I recommend not using MP4 for this. Use a streamable container such as TS or FLV instead.

Now, if you can make some assumptions about the code producing the file (e.g. the chunk offset is always the same and there are no B frames), you can hard-code those values. But that is not guaranteed to keep working after a software update.
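Under the hard-coded assumptions mentioned at the end (a known starting offset and 4-byte big-endian length prefixes), a best-effort reader might look like the sketch below. It stops as soon as a size field looks like zero padding or points past the end of the buffer, since that means the assumed offset was wrong:

```python
def read_nalus(mdat, offset=0, length_size=4):
    """Best-effort: read length-prefixed NALUs from an mdat payload,
    assuming big-endian size fields of `length_size` bytes.
    Stops at zero sizes (padding) or impossible sizes (bad offset)."""
    nalus, pos = [], offset
    while pos + length_size <= len(mdat):
        size = int.from_bytes(mdat[pos:pos + length_size], "big")
        pos += length_size
        if size == 0 or pos + size > len(mdat):
            break  # not at a real NALU boundary
        nalus.append(mdat[pos:pos + size])
        pos += size
    return nalus

data = b"\x00\x00\x00\x02\x65\xb8" + b"\x00\x00\x00\x01\x41"
print(read_nalus(data))  # [b'e\xb8', b'A']
```

This is only a sketch of the guessing game, not a substitute for the moov: the real chunk offsets (stco) and length-field size (avcC) can differ from these assumptions.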

validation of single h264 AVC nal unit

I have extracted several NAL units from a hard disk. I want to know which of them are valid NAL units. Is there any tool or code that can validate the structure or syntax of a single H.264 AVC NAL unit?
It depends. First you need to figure out what the NAL type is from the first byte. If the NAL is an SPS or PPS, you can basically decode it as-is and see if the result is sane.
If the NAL is an actual coded slice, you will need at least three NALs to decode it: the coded slice plus its corresponding SPS and PPS. You can decode the first few elements of the slice header without them, but then you would need the SPS and PPS matching the PPS ID in the slice header to continue.
There were some command line tools (maybe h264_parse) that would dump this type of header information for you, or you can hack the reference decoder to help you out.
http://iphome.hhi.de/suehring/tml/
In the end, the only way to know if your NAL is "good" is to either match it against the bitstream you started out with, or fully decode it and verify that the resulting picture output is bit-exact.
Checking the NAL byte length, and maybe a checksum or CRC of each NAL, can help too, but no such mechanism exists in the bitstream; you'd have to add that yourself.
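A few cheap header-level checks are possible on a single NAL before any real decoding. This sketch inspects only the first byte, so it can reject obvious garbage but cannot prove validity:

```python
def nal_sanity(nal):
    """Cheap plausibility checks on one H.264 NAL unit's header byte.
    Returning True does NOT mean the NAL is valid, only not obviously broken."""
    if not nal:
        return False
    forbidden = nal[0] >> 7
    ref_idc = (nal[0] >> 5) & 0x3
    nal_type = nal[0] & 0x1F
    if forbidden:
        return False    # forbidden_zero_bit must be 0
    if nal_type == 0:
        return False    # type 0 is unspecified
    if nal_type in (5, 7, 8) and ref_idc == 0:
        return False    # IDR slices and parameter sets must be reference NALs
    return True

print(nal_sanity(b"\x65"))  # True  (IDR slice, nal_ref_idc = 3)
print(nal_sanity(b"\xe5"))  # False (forbidden_zero_bit set)
```

For anything deeper, as the answer says, you are back to decoding against the matching SPS/PPS.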

Why do we transmit the IBP frames out of order?

We transmit IBP frames in the order IPBBPBB and then display them as IBBPBBP. The question is why we do that; I can't visualize it in my head. I mean, why not just transmit them in the order they are to be displayed?
With bi-directional frames in temporal compression, decoding order (order in which the data needs to be transmitted for sequential decoding) is different from presentation order. This explains the effect you are referring to.
In the picture below, you need the data of frame P2 to decode frame B1, so when it comes to transmission, P2 goes ahead.
See more on this: B-Frames in DirectShow
With MPEG-2, a new frame type was introduced: the bi-directionally predicted frame, or B frame. As the name suggests, the frame is derived from at least two other frames, one from the past and one from the future (Figure 2).
Since the B1 frame is derived from the I0 and P2 frames, both of them must be available to the decoder before decoding of B1 can start. This means the transmission/decoding order is not the same as the presentation order. That's why there are two types of timestamps for video frames: PTS (presentation time stamp) and DTS (decoding time stamp).
Briefly: it's done to speed up the decoder. You mentioned the usual MPEG-2 GOP (Group of Pictures), so I'll explain the answer for MPEG-2, though H.264 uses exactly the same logic.
For the coder, the picture difference is smaller when it is calculated using not only previous frames but successive frames too. That's why the coder (usually) processes frames in display order. So, in an IBBPBBP GOP, every B frame can use the previous I frame and the next P frame for prediction.
For the decoder, it's better when every successive frame uses only previous frames for prediction. That's why the pictures in the bitstream are reordered. In the reordered group IPBBPBB, every B frame uses an I frame and a P frame that are both earlier in the stream, and that's faster.
Also, every frame has its own PTS (presentation timestamp), which determines the display order, so the reordering is not a problem.
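The reordering above can be sketched as a small function: for a simple GOP like this, each run of B frames transmitted after a reference frame is moved back in front of that reference for display. A minimal sketch (frame labels like "B1" are just illustrative names carrying the display index):

```python
def display_order(decode_order):
    """Turn decode order into display order for a simple IPB GOP:
    B frames arriving after a reference slot in just before it."""
    out = []
    for frame in decode_order:
        if frame.startswith("B"):
            out.insert(len(out) - 1, frame)  # place B before the last reference
        else:
            out.append(frame)                # I/P frames keep their position
    return out

# IPBBPBB in decode order becomes IBBPBBP for display
print(display_order(["I0", "P3", "B1", "B2", "P6", "B4", "B5"]))
# ['I0', 'B1', 'B2', 'P3', 'B4', 'B5', 'P6']
```

Real decoders do this with PTS/DTS rather than frame labels, but the permutation is the same one described in the answer.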
This Wikipedia article can answer your question.