I have an H.264 stream stored as a file. I am trying to create an MPEG-4 file by adding this stream to the MDAT box. I have created the other headers required by the MPEG-4 standard, but I am unable to read the resulting MPEG-4 file.
I parsed the H.264 stream and I see that there are multiple I-frames in the file. It seems to me that this is a fragmented H.264 stream.
Is there any way in which this fragmented H.264 stream can be reassembled into a single I-frame?
I have gone through the link Problem to Decode H264 video over RTP with ffmpeg (libavcodec).
I implemented what was mentioned in the link, but I am still unable to play the MPEG-4 file thus created.
With the above technique, I get fragmentType = 5 and the following nalTypes: (8, 2, 1, 0, 0, ...). I get the start bit as specified, but for the other fragments I get 00 (for startBit|endBit). I never get the end bit.
When I try using FFmpeg to reconvert the MPEG-4 file that was created, I get the following error: "header damaged". It looks like the reconstruction of the IDR frames is not working properly.
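For reference, the FU-A defragmentation that the linked answer describes boils down to the following (a minimal sketch based on RFC 6184; `reassemble_fu_a` and the error handling are illustrative, not code from the question):

```python
def reassemble_fu_a(rtp_payloads):
    """Reassemble the FU-A fragments (RFC 6184) of one NAL unit.

    rtp_payloads: the RTP payloads of one fragmented NAL unit, in order.
    Returns the NAL unit in Annex B form (start code + NAL header + data).
    """
    first, last = rtp_payloads[0], rtp_payloads[-1]
    fu_indicator, fu_header = first[0], first[1]
    if fu_indicator & 0x1F != 28:               # payload type 28 = FU-A
        raise ValueError("not an FU-A payload")
    if not fu_header & 0x80:                    # S (start) bit on first fragment
        raise ValueError("first fragment missing start bit")
    if not last[1] & 0x40:                      # E (end) bit on last fragment
        raise ValueError("last fragment missing end bit")
    # Rebuild the original NAL header: F and NRI come from the FU indicator,
    # the original NAL type from the low 5 bits of the FU header.
    nal_header = bytes([(fu_indicator & 0xE0) | (fu_header & 0x1F)])
    data = b"".join(p[2:] for p in rtp_payloads)  # strip the 2-byte FU prefix
    return b"\x00\x00\x00\x01" + nal_header + data
```

So three fragments of an IDR slice (NAL type 5, NRI 3) should come back as a single NAL unit whose first byte after the start code is 0x65.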
Please let me know if the method that I am following has any issues.
The H.264 stream file is around 100 KB. When this file is converted to MP4 using FFmpeg, I get around 38 KB. Does this mean that FFmpeg is re-encoding the stream in order to create the MP4 file?
With the technique mentioned in the link, the MP4 that I create is still around 100 KB.
Please let me know what I am doing wrong.
Thanks in advance.
It sounds like you'd like to wrap an H.264 elementary stream in an MP4 container so that you can play it back.
A tool like mp4box (http://gpac.wp.mines-telecom.fr/mp4box/) will enable you to wrap your elementary stream into an MP4 file. For example:
mp4box -add MySourceFile.h264 MyDestFile.mp4
Background:
I am trying to build my own RTMP server for live streaming (Python 3).
I have succeeded in establishing an RTMP connection between the server and the client (using OBS). The client sends me the video and audio data (audio codec: AAC, video codec: AVC).
I also built a website (HTTP) with an HLS (hls.js) player (.ts & .m3u8) that works properly.
Goal:
What I need to do is take the video and audio data (bytes) from the RTMP connection and mux them into a .ts container so it can be played on my website.
Plan:
I planned to mux them into an FLV file and then convert it to TS using FFmpeg, which actually worked with the audio data because FLV supports AAC, so I generated FLV files that contained audio only.
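The audio-only muxing described above amounts to packing FLV audio tags; a minimal sketch per the FLV v10 spec (the function name and the fixed 0xAF flags byte are illustrative assumptions, not code from the question):

```python
import struct

def flv_aac_tag(aac_data, timestamp_ms, is_sequence_header=False):
    """Pack one FLV audio tag carrying AAC data (FLV spec v10, AUDIODATA).

    The first audio tag in the file must carry the AAC sequence header
    (the AudioSpecificConfig) with AACPacketType = 0; later tags carry
    raw AAC frames with AACPacketType = 1.
    """
    body = bytes([0xAF,                       # SoundFormat=10 (AAC), 44kHz, 16-bit, stereo
                  0 if is_sequence_header else 1]) + aac_data
    ts = timestamp_ms & 0xFFFFFFFF
    tag = bytes([8])                          # TagType 8 = audio
    tag += struct.pack(">I", len(body))[1:]   # DataSize, 24-bit
    tag += struct.pack(">I", ts)[1:]          # Timestamp, low 24 bits
    tag += bytes([ts >> 24])                  # TimestampExtended
    tag += b"\x00\x00\x00"                    # StreamID, always 0
    tag += body
    tag += struct.pack(">I", 11 + len(body))  # PreviousTagSize
    return tag
```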
Problem:
The problem started when I tried to add the video data to the FLV file, but it doesn't seem possible because FLV does not support AVC.
What I tried to do:
I could choose another container format that supports AVC/AAC, but they all seem really complicated and I don't have much time for this.
I tried to find a way to transcode the AVC to H.263 and then put it inside the FLV (because FLV does support H.263), but I couldn't find one.
I really need help. Thanks in advance!
I'm currently writing an RTMP server to receive an RTMP stream and record it to multiple FLV files, segmented by time.
For example: 1 minute -> 1 FLV file, 2 minutes -> 2 FLV files, and so on.
Problem: only the first FLV file is playable; from the second onwards, they are not playable. Maybe they are missing some codec metadata (H.264).
How can I resolve that problem?
Yes, there is a header, as well as sequence headers depending on the codec used. The segments must also be split on keyframes. The FLV format is well documented here: https://www.adobe.com/content/dam/acom/en/devnet/flv/video_file_format_spec_v10.pdf
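A minimal sketch of what every new segment has to start with (assuming you cache the complete sequence-header tags, including their PreviousTagSize fields, when they first arrive in the stream; the function name is illustrative):

```python
import struct

def flv_segment_prefix(avc_seq_header_tag, aac_seq_header_tag=b""):
    """Bytes every FLV segment after the first must start with.

    avc_seq_header_tag / aac_seq_header_tag are the complete FLV tags
    that carried the AVCDecoderConfigurationRecord and the AAC
    AudioSpecificConfig; cache them when they first arrive and replay
    them at the top of each new file. After this prefix, begin the
    segment on a video keyframe tag.
    """
    header = b"FLV\x01"                # signature + version 1
    header += bytes([0x05])            # flags: audio (0x04) + video (0x01)
    header += struct.pack(">I", 9)     # DataOffset: size of this header
    header += b"\x00\x00\x00\x00"      # PreviousTagSize0
    return header + aac_seq_header_tag + avc_seq_header_tag
```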
My end goal is to read a raw video from a file into avconv, H.264-encode it, and pipe it to VLC. However, I cannot seem to get it to work. Even just piping an already-encoded video to VLC does not work. Trying:
avconv -i test.mp4 -f h264 - | vlc -
appears to be encoding the video (the command-line output looks like it is processing frame by frame), but nothing is displayed in VLC. A similar test with an .avi works fine:
avconv -i test.avi -f avi - | vlc -
Is there something special about piping out H.264-encoded video?
Specify the demuxer:
cat test.h264 | vlc --demux h264 -
--demux=<string> Demux module
Demultiplexers are used to separate the "elementary" streams (like
audio and video streams). You can use it if the correct demuxer is
not automatically detected. You should not set this as a global
option unless you really know what you are doing.
VLC command line help
I am using live555 to receive network camera video via RTSP; the data is H.264-encoded. Is there any open-source software for decoding the received packets and parsing them into individual video frames?
Best regards,
Dídac Pérez
Yes, ffmpeg can decode the data. In fact, you can use ffmpeg to receive the data directly, transcode/transform it to your desired form, and send it out again or dump it into a file if you wish. If you want to use live555 for receiving and ffmpeg for decoding, simply write the output of live555 to a pipe and feed it to ffmpeg.
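A sketch of that pipe approach, assuming live555's bundled openRTSP client (whose -v option writes just the video stream to stdout) and illustrative function names:

```python
import subprocess

def rtsp_decode_cmds(rtsp_url, out_file):
    """Build the two halves of the pipeline: openRTSP (live555's client)
    dumps the H.264 elementary stream to stdout, and ffmpeg decodes it
    from stdin into raw video frames."""
    dump_cmd = ["openRTSP", "-v", rtsp_url]   # -v: video stream only, to stdout
    decode_cmd = ["ffmpeg", "-f", "h264", "-i", "-",
                  "-f", "rawvideo", out_file]
    return dump_cmd, decode_cmd

def decode_rtsp(rtsp_url, out_file):
    # Wire openRTSP's stdout into ffmpeg's stdin, like `openRTSP | ffmpeg`.
    dump_cmd, decode_cmd = rtsp_decode_cmds(rtsp_url, out_file)
    dump = subprocess.Popen(dump_cmd, stdout=subprocess.PIPE)
    subprocess.run(decode_cmd, stdin=dump.stdout, check=True)
    dump.stdout.close()
```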
I'm confused by your question: ffmpeg can receive RTSP streams itself, so you get direct access to all of its built-in decoders without needing live555 in between.
I have an MPG file which I want to convert to FLV format, but there's a requirement that while the MPG file is converting, I simultaneously play the converted FLV in Flash CS3. How can I do this? I am using CS3 and AS3.
If you want to convert your files programmatically, use ffmpeg. It is a command-line tool which can convert video files to nearly any format. You execute ffmpeg with the correct parameters and wait until the video is ready. This works only on the server side: the Flash client uploads the video file to the server, where it gets converted. You can execute ffmpeg from any server-side language, such as PHP.
Sadly, I have no idea whether it is possible to watch the video while it is converting. I think not, but maybe someone else knows more.