What's the difference between a stream generated by ffmpeg vs libVLC?

I am trying to transcode an MP4 file to a WebM file.
After that I read this file chunk by chunk and feed it to an HTML5 viewer
(the video tag of an HTML5 page).
To stream from the MP4 file to a WebM file I had three options:
1) Stream out using VLC media player application
2) Stream using libVLC through C code
How to stream video using C/C++
3) Stream using the ffmpeg command line:
ffmpeg -i test.mp4 -c:v libvpx -c:a libvorbis -pix_fmt yuv420p -quality good output.webm
When consuming the WebM generated by these three options, the 1st and 2nd do not work, while the 3rd one does. The 1st and 2nd work only after streaming to the file has completed and the last chunk of the output file is fed to the HTML5 video player.
It seems the VLC player and libVLC do not generate the required fragments with the keyframe information that ffmpeg generates.
Is there any way we can instruct libVLC or the VLC player to also generate fragments with keyframe info?
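The chunk-by-chunk feeding described above (reading the output file while the transcoder is still writing it) can be sketched in Python; the chunk size and polling interval here are arbitrary assumptions:

```python
import time

def iter_chunks(path, chunk_size=64 * 1024, poll=0.1, stop_after=None):
    """Yield successive chunks of a (possibly still growing) file.

    `stop_after` seconds without new data ends the loop; in a real
    server you would stop when the transcoder finishes instead.
    """
    idle = 0.0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if chunk:
                idle = 0.0
                yield chunk
            else:
                if stop_after is not None and idle >= stop_after:
                    return
                time.sleep(poll)
                idle += poll
```

This only helps if the muxer writes a streamable file from the first chunk onward, which is exactly where the VLC-generated WebM appears to fall short.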

Related

AAC & AVC byte stream data into ts file

Background:
I am trying to build my own RTMP server for live streaming (Python 3+).
I have succeeded in establishing an RTMP connection between the server and the client (using OBS). The client sends me the video and audio data (audio codec: AAC, video codec: AVC).
I have also built a website (HTTP) with an HLS (hls.js) player (.TS & .M3U8) that works properly.
Goal:
What I need to do is take the video and audio data (bytes) from the RTMP connection and mux them into a .ts container so it can be played on my website.
Plan:
I planned to mux them into an FLV file and then convert it to TS using FFmpeg. This actually worked for the audio data, because FLV supports AAC, so I generated FLV files that contained audio only.
Problem:
The problem started when I tried to add the video data to the FLV file, which does not seem possible because FLV does not support AVC.
What I tried to do:
I could choose another container format that supports AVC/AAC, but they all seem really complicated and I don't have much time for this.
I tried to find a way to convert the AVC to H.263 and then put it inside the FLV (because FLV does support H.263), but I couldn't find one.
I really need help. Thanks in advance!
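For reference while debugging the muxing plan above: an FLV tag is an 11-byte header (type, 24-bit data size, 24-bit timestamp plus an extended byte, stream ID 0) followed by the payload and a 4-byte PreviousTagSize. A minimal sketch of serializing one tag:

```python
import struct

AUDIO_TAG, VIDEO_TAG, SCRIPT_TAG = 8, 9, 18

def flv_tag(tag_type, timestamp_ms, payload):
    """Serialize one FLV tag: 11-byte header + payload + PreviousTagSize."""
    size = len(payload)
    header = bytes([
        tag_type,
        (size >> 16) & 0xFF, (size >> 8) & 0xFF, size & 0xFF,  # DataSize (24-bit)
        (timestamp_ms >> 16) & 0xFF,
        (timestamp_ms >> 8) & 0xFF,
        timestamp_ms & 0xFF,                                   # Timestamp (24-bit)
        (timestamp_ms >> 24) & 0xFF,                           # TimestampExtended
        0, 0, 0,                                               # StreamID, always 0
    ])
    prev_tag_size = struct.pack(">I", 11 + size)               # header + payload length
    return header + payload + prev_tag_size
```

(For what it's worth, the FLV specification does define an AVC payload for video tags, CodecID 7 with an AVCVIDEOPACKET body, which is what OBS itself sends over RTMP, so muxing AVC into FLV is possible in principle.)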

audio/mp4; codecs="mp4a.40.2" not playing in Chrome and Firefox

I want to convert the audio files that I stream on my website to audio/mp4; codecs="mp4a.40.2".
Using ffmpeg-cli-wrapper, I am converting my uploaded audio files with this command here:
ffmpeg -i /tmp/input.any -acodec aac -b:a 256000 /tmp/output.aac
On the client I am creating a SourceBuffer like this:
this.sourceBuffer = this.mediaSource.addSourceBuffer('audio/mp4; codecs="mp4a.40.2"');
The errors are:
Chrome:
NotSupportedError: Failed to load because no supported source was found.
Firefox:
NotSupportedError: The media resource indicated by the src attribute or assigned media provider object was not suitable.
Here comes the fun part:
If I create the SourceBuffer using audio/aac as mime-type:
this.sourceBuffer = this.mediaSource.addSourceBuffer('audio/aac');
the audio gets played correctly on Chrome but Firefox says:
MediaSource.addSourceBuffer: Type not supported in MediaSource
Update
After changing the command to
ffmpeg -i /tmp/input.any -acodec aac -b:a 256000 /tmp/output.mp4
^^^
Chrome and Firefox no longer report an error when using audio/mp4; codecs="mp4a.40.2", but the audio is still not played.
See
https://stackoverflow.com/a/64432478/826983
In your ffmpeg command, use the extension .m4a (or .mp4), not .aac:
ffmpeg -i /tmp/input.any -acodec aac -b:a 256000 /tmp/output.m4a
The .aac extension is used for AAC in the ADTS container, which is not a format that is widely used or supported on the web. The .m4a or .mp4 extension is used for AAC in the MP4 container, which is far more common.
Or, if for some reason you really want the file to be named output.aac but yet want it to contain AAC in MP4, then you can specify the output format instead of having ffmpeg guess the format that you want based on the output file extension:
ffmpeg -i /tmp/input.any -acodec aac -b:a 256000 -f mp4 /tmp/output.aac
But if you use a non-standard extension like that then you may also need to update your web server configuration so that it will report the correct media type.
I was having a similar issue where my fMP4 played fine if used directly as src inside the video tag, but it didn't play when used via MediaSource Extensions. My problem was that my AAC frame in the fMP4 was with the ADTS header and it appears that MSE doesn't like those headers. So, I removed the ADTS header from each AAC frame and it worked fine. (I already had the Audio Specific Config added in my ESDS box, so I could remove the ADTS header without issue.)
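The ADTS stripping mentioned in the last paragraph can be sketched as follows; it assumes each AAC frame starts with a well-formed ADTS header (7 bytes, or 9 when a CRC is present) and keeps only the raw AAC payloads:

```python
def strip_adts(data):
    """Split an ADTS byte stream into raw AAC frame payloads."""
    frames = []
    pos = 0
    while pos + 7 <= len(data):
        # Syncword: 12 set bits at the start of every ADTS header.
        assert data[pos] == 0xFF and (data[pos + 1] & 0xF0) == 0xF0, "lost sync"
        protection_absent = data[pos + 1] & 0x01
        header_len = 7 if protection_absent else 9  # a CRC adds 2 bytes
        # frame_length: 13 bits spanning bytes 3..5, includes the header itself.
        frame_len = ((data[pos + 3] & 0x03) << 11) | (data[pos + 4] << 3) | (data[pos + 5] >> 5)
        frames.append(data[pos + header_len : pos + frame_len])
        pos += frame_len
    return frames
```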

Piping out h.264 encoded video through avconv to vlc

My end goal is to read a raw video file into avconv, H.264-encode it, and pipe it to VLC. However, I cannot seem to get it to work; even piping an already encoded video to VLC does not work. Trying:
avconv -i test.mp4 -f h264 - | vlc -
appears to be encoding the video (the command-line output looks like it is processing frame by frame), but nothing is displayed in VLC. A similar test with an .avi works fine:
avconv -i test.avi -f avi - | vlc -
Is there something special about piping out H.264-encoded video?
Specify the demuxer:
cat test.h264 | vlc --demux h264 -
--demux=<string> Demux module
Demultiplexers are used to separate the "elementary" streams (like
audio and video streams). You can use it if the correct demuxer is
not automatically detected. You should not set this as a global
option unless you really know what you are doing.
VLC command line help
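The shell pipeline above can also be wired up programmatically. A sketch in Python that connects one process's stdout to another's stdin, the same way the shell's `|` does (the avconv/vlc argument lists in the comment mirror the commands above and assume both tools are installed):

```python
import subprocess

def run_pipeline(producer_cmd, consumer_cmd):
    """Run producer | consumer, returning the consumer's stdout."""
    producer = subprocess.Popen(producer_cmd, stdout=subprocess.PIPE)
    consumer = subprocess.Popen(consumer_cmd,
                                stdin=producer.stdout,
                                stdout=subprocess.PIPE)
    producer.stdout.close()  # let the producer see SIGPIPE if the consumer exits
    out, _ = consumer.communicate()
    producer.wait()
    return out

# e.g. run_pipeline(["avconv", "-i", "test.mp4", "-f", "h264", "-"],
#                   ["vlc", "--demux", "h264", "-"])
```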

Reconstructing fragmented H.264 stream

I have an H.264 stream stored as a file. I am trying to create an MPEG-4 file by adding this stream to the mdat box. I have created the other headers required by the MPEG-4 standard, but I am unable to read the MPEG-4 file.
I parsed the H.264 stream and I see that there are multiple I-frames in the file. It seems to me that this is a fragmented H.264 stream.
Is there any way this fragmented H.264 stream can be combined into a single I-frame?
I have gone through the link Problem to Decode H264 video over RTP with ffmpeg (libavcodec).
I implemented what was mentioned in the link, but I am still unable to play the MPEG-4 file thus created.
With the above technique, I get fragmentType = 5 and the following nalTypes (8, 2, 1, 0, 0, ...). I get the start bit as specified, and for the other fragments I get 00 (for startBit|endBit); I never get the end bit.
When I try using FFmpeg to convert the resulting MPEG-4 file, I get the error "header damaged". It looks like the reconstruction of the IDR frames is not working properly.
Please let me know if the method that I am following has any issues.
The H.264 stream file is around 100 KB. When this file is converted to MP4 using FFmpeg, I get around 38 KB. Does that mean FFmpeg is re-encoding the stream in order to create the MP4 file?
With the technique mentioned in the link, the MP4 that I created is around 100 KB itself.
Please let me know what I am doing wrong.
Thanks in advance.
It sounds like you'd like to wrap an H.264 elementary stream in a mp4 container so that you can play it back.
A tool like MP4Box (http://gpac.wp.mines-telecom.fr/mp4box/) will let you wrap your elementary stream into an MP4 file. For example:
mp4box -add MySourceFile.h264 MyDestFile.mp4
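Regarding the fragment reconstruction described in the question: FU-A fragments (RTP payload type 28) are rebuilt into a single NAL unit by restoring the NAL header from the first fragment (start bit set) and concatenating the payloads. A minimal sketch, assuming `fragments` holds the raw FU payloads in order:

```python
def reassemble_fu_a(fragments):
    """Rebuild one NAL unit from an ordered list of FU-A payloads."""
    nal = bytearray()
    for frag in fragments:
        fu_indicator, fu_header = frag[0], frag[1]
        assert fu_indicator & 0x1F == 28, "not an FU-A payload"
        if fu_header & 0x80:  # start bit
            # Reconstructed NAL header: F+NRI from the indicator, type from the FU header.
            nal.append((fu_indicator & 0xE0) | (fu_header & 0x1F))
        nal.extend(frag[2:])
    return bytes(nal)
```

If the end bit never arrives, as described above, the fragment sequence is incomplete and the reconstructed NAL (and hence the mdat contents) will be truncated, which could explain the "header damaged" error.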

What is the meaning of "Only raw PCM 16 bits mono files with the same frequency as the current voice are supported by the audio tag."?

When trying to use the HTML5 <audio> tag, I get this error:
Only raw PCM 16 bits mono files with the same frequency as the current
voice are supported by the audio tag.
What does this mean?
It means exactly what it says: only raw PCM 16-bit mono files with the same frequency as the current voice are supported by the HTML5 <audio> tag. Your file is in the wrong format; resample it to a single (mono) channel at, say, 44.1 kHz (standard), encoded as 16-bit PCM, and it should play fine.
Is it a WAV file? If not, you need to put the PCM into a WAV file.
PCM is the usual audio format inside WAV containers; WAV can hold other codecs too, but most WAV files contain PCM.
So if the file you are trying to play is not a WAV file, that might be what it is complaining about.
The extension being .wav does not by itself mean it is a WAV file; the file needs the WAV headers and the other structure a WAV file would have.
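Wrapping raw PCM into a proper WAV container, as the answers suggest, only needs the standard-library wave module; a sketch that writes 16-bit mono PCM at 44.1 kHz:

```python
import struct
import wave

def write_pcm16_mono_wav(path, samples, rate=44100):
    """Write a list of 16-bit signed samples as a mono WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)     # mono
        w.setsampwidth(2)     # 16-bit PCM
        w.setframerate(rate)  # e.g. 44.1 kHz
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))
```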