FFMPEG HLS streaming and transcoding on the fly to HTML player - video duration changes while transcoding

I am trying to build a video streaming server and watch videos directly from a web browser. The idea is for the server to pull video from a remote server, transcode it with a different audio format locally, and then instantly stream it to the client (this is the specific way I need it to function).
This is the FFMPEG command I'm currently using:
ffmpeg -i "url" -c:v copy -c:a aac -ac 2 -f hls -hls_time 60 -hls_playlist_type event -hls_flags independent_segments out.m3u8
The HLS stream is attached to the HTML player with hls.js and it works. However, the video duration keeps changing while the video is being transcoded. I have tried to change the video duration with JS, like $('video').duration = 120;, with no luck.
How do I make the player display the full video file's duration instead of the currently transcoded time?
I am also planning to implement video seeking, but I am clueless there. The current idea is to send the seek time to the server, terminate ffmpeg, and restart from that specific time. However, I think the player might get stuck loading and will not start playing without a reload.

FFmpeg can't write segments to the manifest before they are on disk, so the playlist (and thus the reported duration) grows as segments are produced. If you don't want this "live-like" behavior during media preparation, you will need to wait for ffmpeg to finish: with -hls_playlist_type event, the playlist only gets its closing #EXT-X-ENDLIST tag, and a fixed duration, once encoding completes.
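On the seeking plan: the restart approach can work. Below is a minimal Node sketch of that idea, assuming Express and a hypothetical /seek endpoint (none of this is from the original setup, and the client would still need to reload the playlist after seeking):

    const { spawn } = require("child_process");
    const express = require("express");

    const app = express();
    let ffmpeg = null;

    function startTranscode(startSeconds) {
        if (ffmpeg) ffmpeg.kill("SIGKILL"); // terminate the previous transcode first
        ffmpeg = spawn("ffmpeg", [
            "-ss", String(startSeconds), // seek the input before transcoding
            "-i", "url",
            "-c:v", "copy", "-c:a", "aac", "-ac", "2",
            "-f", "hls", "-hls_time", "60",
            "-hls_playlist_type", "event",
            "-hls_flags", "independent_segments",
            "out.m3u8"
        ]);
    }

    // hypothetical endpoint: GET /seek?t=120 restarts the transcode at 120s
    app.get("/seek", (req, res) => {
        startTranscode(Number(req.query.t) || 0);
        res.sendStatus(204);
    });

    app.listen(3000, () => startTranscode(0));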

Related

Is it possible to send a large video file (larger than 4 Gig) from browser via a webrtc or other and stream it via HLS or other format?

My question is about the HTML side of things (via a browser like Chrome or Safari)...
Also, the main concern is the Chrome/Safari limitation when sending a file over 4 GB via an
<input type="file">
Example: https://obsproject.com/ is streaming software that you install on a desktop to stream from a camera or from a video file to an RTMP endpoint....
But is it doable to do the same (send a pre-recorded .mov) and stream it to a server, and have that server "record" it to make it available after the stream... like using the browser's WebRTC API and sending it to a multipoint control unit (MCU) that records or converts to the HLS .m3u8 format?
Digging around on Google, I found this:
https://webrtc.github.io/samples/src/content/capture/video-pc/
but the demo doesn't work :(
I also found https://github.com/muaz-khan/FileBufferReader with function FileBufferReader() {...
But what about recording (from the user's browser to another peer, where that other peer could be a server that takes this "stream", reassembles it, and produces the HLS .m3u8 format)?
Delivering a 4 GB file is problematic: any network hiccup during delivery could cause an issue.
OBS sends an RTMP video stream to many services to "live stream", like Twitch, YouTube, etc. (I work for api.video; we sponsor OBS, and our service can be used to livestream this way.)
These services convert the RTMP stream into HLS for you and deliver the streams to your customers.
If you have a 4 GB .mov file, you can upload it to these services as well to convert to the HLS format. But, like you said, 4 GB can be problematic. Our workaround has been to use file.slice in JavaScript to split the big video into manageable chunks (which are reassembled on the server).
I've written a blog post on how to do this:
https://api.video/blog/tutorials/uploading-large-files-with-javascript
and a live demo (using the api.video backend): https://upload.a.video
Doug
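For illustration, a minimal browser-side sketch of the file.slice approach described above (the /upload endpoint, Content-Range scheme, and chunk size here are assumptions for illustration, not api.video's actual API; the blog post has the real flow):

    async function uploadInChunks(file, chunkBytes = 50 * 1024 * 1024) {
        for (let start = 0; start < file.size; start += chunkBytes) {
            const end = Math.min(start + chunkBytes, file.size);
            await fetch("/upload", { // hypothetical endpoint that reassembles chunks
                method: "POST",
                headers: {
                    // Content-Range tells the server where this piece belongs
                    "Content-Range": "bytes " + start + "-" + (end - 1) + "/" + file.size
                },
                body: file.slice(start, end) // Blob slice: no full read into memory
            });
        }
    }

    // Usage with the 4 GB concern above:
    // document.querySelector('input[type="file"]')
    //     .addEventListener("change", (e) => uploadInChunks(e.target.files[0]));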

Wowza Streaming Engine: no sound in HTML player

I use the tutorial at
https://www.wowza.com/forums/content.php?43-How-to-set-up-live-video-recording, and by using FlashRTMPPlayer11 I can record webcam streams and also play with no problem.
But when I copy the recorded video file (wowza_output.mp4) and give it to an HTML video player as a source, no sound comes out. I have the same issue when I try to open the video in VLC.
I also tried transcoders and attempted to convert the audio encoding to AAC, but it is still not working.
Help please...
The audio codec used for a Flash app's live stream recording is Speex, so your video player needs to support Speex audio in order to play the recording correctly. Transcoding to AAC should fix it, but you can't currently transcode a VOD file in Wowza. If you're using a different encoder to re-encode to AAC, like ffmpeg, the command would be:
ffmpeg -i wowza_output.mp4 -c:v copy -c:a libvo_aacenc wowza_output_aac.mp4
(note: newer FFmpeg builds have dropped libvo_aacenc; use the built-in encoder via -c:a aac instead)
When you launch the Flash application, it should prompt you to allow it to use your camera/microphone. Make sure that you enable this. You can set the Flash Player Camera and Mic settings from the Flash Player Settings Manager.
You may also want to test a different browser.
Using Wowza transcoders and setting the transcoder to convert to the AAC audio format should do the trick. Maybe you have misconfigured something.
Don't forget to enable the transcoder, and be sure that you are looking at the new source file (you will have two of them now), not the old one.
Also, if you are making an API call to record a stream, make sure you change &streamname=your_stream to &streamname=your_stream_transcoded_name

HTML5 video player for streams

There are other questions, but they don't quite answer mine. I have an RPi stream command sending video to 192.168.1.xx:port (in my case I chose 192.168.1.39:8160). If I open a stream decoder like VLC and enter the web URL, the video works just fine. This is OK, but I'd like a web version. How can I do so? Maybe use a common video tag and place that URL in the src attribute? Or is there a plugin to insert in my HTML code for it to receive the web stream?
Thanks beforehand.
Note: my command is the following:
raspivid -o - -t 0 -hf -w 800 -h 400 -fps 24 | cvlc -vvv stream:///dev/stdin --sout '#standard{access=http,mux=ts,dst=:5258}' :demux=h264
A video tag may just work in browsers that support H.264. If you'd like wider support, you can use http://mediaelementjs.com/, which provides a fallback to Flash.
I usually use HTML5 video streamed over HTTP (partial requests), and you will have to check whether that works with your streaming protocol. If not, you may have to stream it over HTTP (or another way?).
You can test this by pasting the URI of the stream into a recent browser and seeing whether the video plays.
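To make that test concrete, a small sketch that points a video element at the address from the question (whether it plays depends entirely on the browser supporting the stream's container and codec; H.264 in an MPEG-TS container, as cvlc produces here, generally will not play in a bare video tag):

    const video = document.createElement("video");
    video.src = "http://192.168.1.39:8160/"; // the stream address from the question
    video.controls = true;
    video.autoplay = true;
    document.body.appendChild(video);
    // If the browser can demux the container and decode the codec, it plays;
    // otherwise the element fires an 'error' event:
    video.addEventListener("error", () => console.log("Stream format not supported"));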

Best approach to real time http streaming to HTML5 video client [closed]

I'm really stuck trying to understand the best way to stream real-time output of ffmpeg to an HTML5 client using node.js, as there are a number of variables at play and I don't have a lot of experience in this space, having spent many hours trying different combinations.
My use case is:
1) The IP video camera's RTSP H.264 stream is picked up by FFMPEG and remuxed into an MP4 container using the following FFMPEG settings in node, output to STDOUT. This is only run on the initial client connection, so that partial content requests don't try to spawn FFMPEG again.
liveFFMPEG = child_process.spawn("ffmpeg", [
    "-i", "rtsp://admin:12345@192.168.1.234:554", "-vcodec", "copy", "-f",
    "mp4", "-reset_timestamps", "1", "-movflags", "frag_keyframe+empty_moov",
    "-" // output to stdout
], {detached: false});
2) I use the node http server to capture the STDOUT and stream that back to the client upon a client request. When the client first connects I spawn the above FFMPEG command line then pipe the STDOUT stream to the HTTP response.
liveFFMPEG.stdout.pipe(resp);
I have also used the stream data event to write the FFMPEG output to the HTTP response, but it makes no difference:
liveFFMPEG.stdout.on("data", function (data) {
    resp.write(data);
});
I use the following HTTP headers (which are also used and working when streaming pre-recorded files):
var total = 999999999; // fake a large file
var partialstart = 0;
var partialend = total - 1;
if (range !== undefined) {
    var parts = range.replace(/bytes=/, "").split("-");
    partialstart = parts[0];
    partialend = parts[1];
}
var start = parseInt(partialstart, 10);
var end = partialend ? parseInt(partialend, 10) : total - 1; // fake a large file if no range request
var chunksize = (end - start) + 1;
resp.writeHead(206, {
    // note: 'Transfer-Encoding: chunked' cannot be combined with Content-Length
    'Content-Type': 'video/mp4',
    'Content-Length': chunksize, // large size to fake a file
    // a 206 response reports the served range in Content-Range;
    // Accept-Ranges only advertises that byte ranges are supported
    'Content-Range': 'bytes ' + start + '-' + end + '/' + total,
    'Accept-Ranges': 'bytes'
});
3) The client has to use HTML5 video tags.
I have no problems streaming playback (using fs.createReadStream with 206 HTTP partial content) to the HTML5 client of a video file previously recorded with the above FFMPEG command line (but saved to a file instead of STDOUT), so I know the FFMPEG stream is correct, and I can even see the video live-streaming correctly in VLC when connecting to the HTTP node server.
However, trying to stream live from FFMPEG via node HTTP seems to be a lot harder, as the client displays one frame and then stops. I suspect the problem is that I am not setting up the HTTP connection to be compatible with the HTML5 video client. I have tried a variety of things, like using HTTP 206 (partial content) and 200 responses, and putting the data into a buffer and then streaming, with no luck, so I need to go back to first principles to ensure I'm setting this up the right way.
Here is my understanding of how this should work; please correct me if I'm wrong:
1) FFMPEG should be set up to fragment the output and use an empty moov (the FFMPEG frag_keyframe and empty_moov mov flags). This means the client does not use the moov atom, which is typically at the end of the file and isn't relevant when streaming (there is no end of file), but it means no seeking is possible, which is fine for my use case.
2) Even though I use MP4 fragments and an empty MOOV, I still have to use HTTP partial content, as the HTML5 player will otherwise wait until the entire stream is downloaded before playing, which with a live stream never ends, so it is unworkable.
3) I don't understand why piping the STDOUT stream to the HTTP response doesn't work when streaming live, yet if I save to a file I can stream that file easily to HTML5 clients using similar code. Maybe it's a timing issue, as it takes a second for the FFMPEG spawn to start, connect to the IP camera, and send chunks to node, and the node data events are irregular as well. However, the bytestream should be exactly the same as saving to a file, and HTTP should be able to cater for delays.
4) When checking the network log from the HTTP client when streaming an MP4 file created by FFMPEG from the camera, I see there are 3 client requests: a general GET request for the video, for which the HTTP server returns about 40 KB; then a partial content request with a byte range for the last 10 KB of the file; then a final request for the bits in the middle not yet loaded. Maybe the HTML5 client, once it receives the first response, is asking for the last part of the file to load the MP4 MOOV atom? If that is the case, it won't work for streaming, as there is no MOOV atom and no end of file.
5) When checking the network log when trying to stream live, I get an aborted initial request with only about 200 bytes received, then a re-request, again aborted after 200 bytes, and a third request which is only 2 KB long. I don't understand why the HTML5 client would abort the request, as the bytestream is exactly the same as the one I can successfully use when streaming from a recorded file. It also seems node isn't sending the rest of the FFMPEG stream to the client, yet I can see the FFMPEG data in the .on event routine, so it is getting to the node HTTP server.
6) Although I think piping the STDOUT stream to the HTTP response should work, do I have to build an intermediate buffer and stream that will allow the HTTP partial content client requests to work properly, like they do when (successfully) reading a file? I think this is the main reason for my problems, but I'm not exactly sure how best to set that up in Node. And I don't know how to handle a client request for the data at the end of the file, as there is no end of file.
7) Am I on the wrong track in trying to handle 206 partial content requests, and should this work with normal 200 HTTP responses? HTTP 200 responses work fine for VLC, so I suspect the HTML5 video client will only work with partial content requests?
As I'm still learning this stuff, it's difficult to work through the various layers of this problem (FFMPEG, node, streaming, HTTP, HTML5 video), so any pointers will be greatly appreciated. I have spent hours researching on this site and the net, and I have not come across anyone who has been able to do real-time streaming in node, but I can't be the first, and I think this should be able to work (somehow!).
EDIT 3: As of iOS 10, HLS supports fragmented MP4 files. The answer now is to create fragmented MP4 assets with both a DASH and an HLS manifest. Pretend Flash, iOS 9 and below, and IE 10 and below don't exist.
Everything below this line is out of date. Keeping it here for posterity.
EDIT 2: As people in the comments are pointing out, things change. Almost all browsers will support the AVC/AAC codecs. iOS still requires HLS, but via adaptors like hls.js you can play HLS in MSE. The new answer is HLS + hls.js if you need iOS, or just fragmented MP4 (i.e. DASH) if you don't.
There are many reasons why video and, specifically, live video is very difficult. (Please note that the original question specified that HTML5 video is a requirement, but the asker stated in the comments that Flash is possible. So immediately, this question is misleading.)
First I will restate: THERE IS NO OFFICIAL SUPPORT FOR LIVE STREAMING OVER HTML5. There are hacks, but your mileage may vary.
EDIT: Since I wrote this answer, Media Source Extensions have matured and are now very close to becoming a viable option. They are supported on most major browsers. iOS continues to be a holdout.
Next, you need to understand that video on demand (VOD) and live video are very different. Yes, they are both video, but the problems are different, hence the formats are different. For example, if the clock in your computer runs 1% faster than it should, you will not notice on a VOD. With live video, you will be trying to play video before it happens. If you want to join a live video stream in progress, you need the data necessary to initialize the decoder, so it must be repeated in the stream or sent out of band. With VOD, you can read the beginning of the file, then seek to whatever point you wish.
Now let's dig in a bit.
Platforms:
iOS
PC
Mac
Android
Codecs:
vp8/9
h.264
Theora (VP3)
Common Delivery methods for live video in browsers:
DASH (HTTP)
HLS (HTTP)
flash (RTMP)
flash (HDS)
Common Delivery methods for VOD in browsers:
DASH (HTTP Streaming)
HLS (HTTP Streaming)
flash (RTMP)
flash (HTTP Streaming)
MP4 (HTTP pseudo streaming)
I'm not going to talk about MKV and OGG because I do not know them very well.
html5 video tag:
MP4
webm
ogg
Let's look at which browsers support which formats:
Safari:
HLS (iOS and mac only)
h.264
MP4
Firefox
DASH (via MSE but no h.264)
h.264 via Flash only!
VP9
MP4
OGG
Webm
IE
Flash
DASH (via MSE IE 11+ only)
h.264
MP4
Chrome
Flash
DASH (via MSE)
h.264
VP9
MP4
webm
ogg
MP4 cannot be used for live video (NOTE: DASH is a superset of MP4, so don't get confused by that). MP4 is broken into two pieces: moov and mdat. mdat contains the raw audio/video data, but it is not indexed, so without the moov it is useless. The moov contains an index of all the data in the mdat, but due to its format it cannot be 'flattened' until the timestamps and size of EVERY frame are known. It may be possible to construct a moov that 'fibs' the frame sizes, but that is very wasteful bandwidth-wise.
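To make the moov/mdat discussion concrete, here is an illustrative sketch (not part of the original answer) that walks the top-level boxes of an MP4 file; every box starts with a 4-byte big-endian size followed by a 4-byte type tag:

    function listBoxes(arrayBuffer) { // arrayBuffer: bytes of an MP4 file
        const view = new DataView(arrayBuffer);
        let offset = 0;
        while (offset + 8 <= arrayBuffer.byteLength) {
            const size = view.getUint32(offset); // box size, including the 8-byte header
            const type = String.fromCharCode(
                view.getUint8(offset + 4), view.getUint8(offset + 5),
                view.getUint8(offset + 6), view.getUint8(offset + 7)
            );
            console.log(type, size); // e.g. "ftyp", "moov", "mdat"
            if (size < 8) break; // 0 = "extends to end of file", 1 = 64-bit size; stop here
            offset += size;
        }
    }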
So if you want to deliver everywhere, we need to find the least common denominator. You will see there is no LCD here without resorting to Flash.
example:
iOS only supports h.264 video, and it only supports HLS for live.
Firefox does not support h.264 at all, unless you use Flash.
Flash does not work in iOS.
The closest thing to an LCD is using HLS for your iOS users and Flash for everyone else.
My personal favorite is to encode HLS, then use Flash to play the HLS for everyone else. You can play HLS in Flash via JW Player 6 (or write your own HLS-to-FLV converter in AS3 like I did).
Soon, the most common way to do this will be HLS on iOS/Mac and DASH via MSE everywhere else (this is what Netflix will be doing soon). But we are still waiting for everyone to upgrade their browsers. You will also likely need a separate DASH/VP9 stream for Firefox (I know about open264; it sucks. It can't do video in main or high profile. So it is currently useless).
Thanks everyone, especially szatmary, as this is a complex question with many layers, all of which have to be working before you can stream live video. To clarify my original question and the HTML5 video vs Flash choice: my use case has a strong preference for HTML5 because it is generic, easy to implement on the client, and the future. Flash is a distant second best, so let's stick with HTML5 for this question.
I learnt a lot through this exercise and agree live streaming is much harder than VOD (which works well with HTML5 video). But I did get this to work satisfactorily for my use case, and the solution worked out to be very simple after chasing down more complex options like MSE, Flash, and elaborate buffering schemes in Node. The problem was that FFMPEG was corrupting the fragmented MP4 and I had to tune the FFMPEG parameters; the standard node stream pipe redirection over HTTP that I used originally was all that was needed.
MP4 has a 'fragmentation' option that breaks the file into much smaller fragments, each with its own index, which makes live streaming of MP4 viable. It is not possible to seek back into the stream (OK for my use case), and later versions of FFMPEG support fragmentation.
Note that timing can be a problem: my solution has a lag of between 2 and 6 seconds, caused by the remuxing (effectively FFMPEG has to receive the live stream, remux it, then send it to node for serving over HTTP). Not much can be done about this; however, in Chrome the video tries to catch up as much as it can, which makes the video a bit jumpy but more current than IE11 (my preferred client).
Rather than explaining how the code works in this post, check out the gist with comments (the client code isn't included; it is a standard HTML5 video tag pointing at the node HTTP server address). The gist is here: https://gist.github.com/deandob/9240090
I have not been able to find similar examples of this use case, so I hope the above explanation and code helps others, especially as I have learnt so much from this site and still consider myself a beginner!
Although this is the answer to my specific question, I have selected szatmary's answer as the accepted one, as it is the most comprehensive.
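For readers who don't open the gist, a hedged reconstruction of the simple approach described above (the camera URL and port are placeholders; the actual working code is in the gist):

    const http = require("http");
    const { spawn } = require("child_process");

    http.createServer((req, res) => {
        res.writeHead(200, { "Content-Type": "video/mp4" });
        const ffmpeg = spawn("ffmpeg", [
            "-i", "rtsp://camera-address", // placeholder for the real RTSP URL
            "-vcodec", "copy", "-f", "mp4",
            "-reset_timestamps", "1",
            "-movflags", "frag_keyframe+empty_moov",
            "-" // fragmented MP4 to stdout
        ]);
        ffmpeg.stdout.pipe(res); // the "standard node stream pipe redirection"
        req.on("close", () => ffmpeg.kill("SIGKILL")); // stop encoding when the client leaves
    }).listen(8080);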
Take a look at the JSMPEG project. There is a great idea implemented there: decode MPEG in the browser using JavaScript. Bytes from the encoder (FFMPEG, for example) can be transferred to the browser using WebSockets or Flash, for example. If the community catches up, I think it will be the best HTML5 live video streaming solution for now.
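A rough Node-side sketch of that WebSocket relay idea (assuming the 'ws' npm package and JSMPEG on the client; the input URL and port are placeholders):

    const { spawn } = require("child_process");
    const WebSocket = require("ws");

    const wss = new WebSocket.Server({ port: 8081 });

    // Encode to MPEG1-in-TS, which is what JSMPEG decodes, and write to stdout
    const ffmpeg = spawn("ffmpeg", [
        "-i", "rtsp://camera-address", // placeholder input
        "-f", "mpegts", "-codec:v", "mpeg1video", "-an",
        "-"
    ]);

    ffmpeg.stdout.on("data", (chunk) => {
        for (const client of wss.clients) { // fan the bytes out to every viewer
            if (client.readyState === WebSocket.OPEN) client.send(chunk);
        }
    });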
One way to live-stream an RTSP-based webcam to an HTML5 client (this involves re-encoding, so expect quality loss, and it needs some CPU power):
Set up an Icecast server (it could be on the same machine your web server is on, or on the machine that receives the RTSP stream from the cam).
On the machine receiving the stream from the camera, don't use FFMPEG but gstreamer. It is able to receive and decode the RTSP stream, re-encode it, and stream it to the Icecast server. Example pipeline (video only, no audio):
gst-launch-1.0 rtspsrc location=rtsp://192.168.1.234:554 user-id=admin user-pw=123456 ! rtph264depay ! avdec_h264 ! vp8enc threads=2 deadline=10000 ! webmmux streamable=true ! shout2send password=pass ip=<IP_OF_ICECAST_SERVER> port=12000 mount=cam.webm
=> You can then use the <video> tag with the URL of the Icecast stream (http://127.0.0.1:12000/cam.webm), and it will work in every browser and device that supports WebM.
I wrote an HTML5 video player around the Broadway h264 decoder (emscripten) that can play live (no delay) h264 video in all browsers (desktop, iOS, ...).
The video stream is sent through a WebSocket to the client, decoded frame by frame, and displayed in a canvas (using WebGL for acceleration).
Check out https://github.com/131/h264-live-player on GitHub.
This is a very common misconception. There is no live HTML5 video support (except for HLS on iOS and Mac Safari). You may be able to 'hack' it using a WebM container, but I would not expect that to be universally supported. What you are looking for is included in the Media Source Extensions, where you can feed the fragments to the browser one at a time, but you will need to write some client-side JavaScript.
Take a look at this solution.
As far as I know, Flashphoner allows playing live audio+video streams in a pure HTML5 page.
They use the MPEG1 and G.711 codecs for playback.
The hack is rendering decoded video to an HTML5 canvas element and playing decoded audio via the HTML5 audio context.
How about a JPEG solution: just let the server distribute JPEGs one by one to the browser, then use a canvas element to draw those JPEGs?
http://thejackalofjavascript.com/rpi-live-streaming/
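A rough sketch of that JPEG idea (the /frame.jpg URL is an assumption for illustration):

    const canvas = document.querySelector("canvas");
    const ctx = canvas.getContext("2d");
    const img = new Image();

    img.onload = () => {
        ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
        requestFrame(); // ask for the next frame as soon as this one is painted
    };

    function requestFrame() {
        img.src = "/frame.jpg?t=" + Date.now(); // cache-buster forces a fresh JPEG
    }

    requestFrame();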

Real-time streaming to HTML5 (without WebRTC) just using the video tag

I would like to wrap real-time encoded data into WebM or Ogv and send it to an HTML5 browser.
Can WebM or Ogv do this?
MP4 cannot do this because of its MDAT atom (one cannot wrap h264 and mp3 in real time, package it, and send it to the client).
Say I am feeding the video input from my webcam and the audio from my built-in mic.
(Fragmented MP4 can handle this, but it's a hassle to find libs that do it.)
I need to do this because I do not want to send audio and video separately.
If I did send them separately, audio over an audio tag and video over a video tag (audio and video are demuxed and sent separately), can I sync them in the client browser with JavaScript? I saw some examples, but I'm not sure yet.
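For the syncing question at the end: a rough sketch of the JavaScript approach, assuming separate <audio> and <video> elements that are already playing (the drift threshold is arbitrary, and buffering will fight this):

    const video = document.querySelector("video");
    const audio = document.querySelector("audio");

    setInterval(() => {
        const drift = audio.currentTime - video.currentTime;
        if (Math.abs(drift) > 0.3) {
            audio.currentTime = video.currentTime; // hard resync on noticeable drift
        }
    }, 1000);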
I did this with ffmpeg/ffserver running on Ubuntu as follows for WebM (MP4 and Ogg are a bit easier and should work in a similar manner from the same server, but you should use all three formats for compatibility across browsers).
First, build ffmpeg from source to include the libvpx drivers (even if you're using a version that has them, you need the newest ones, as of this month, to stream WebM, because they only just added the functionality to include global headers). I did this on an Ubuntu server and desktop, and this guide showed me how; instructions for other OSes can be found here.
Once you've got the appropriate version of ffmpeg/ffserver, you can set them up for streaming; in my case this was done as follows.
On the video capture device:
ffmpeg -f video4linux2 -standard ntsc -i /dev/video0 http://<server_ip>:8090/0.ffm
The "-f video4linux2 -standard ntsc -i /dev/video0" portion of that may change depending on your input source (mine is for a video capture card).
Relevant ffserver.conf excerpt:
Port 8090
#BindAddress <server_ip>
MaxHTTPConnections 2000
MaxClients 100
MaxBandwidth 1000000
CustomLog /var/log/ffserver
NoDaemon
<Feed 0.ffm>
File /tmp/0.ffm
FileMaxSize 5M
ACL allow <feeder_ip>
</Feed>
<Feed 0_webm.ffm>
File /tmp/0_webm.ffm
FileMaxSize 5M
ACL allow localhost
</Feed>
<Stream 0.mpg>
Feed 0.ffm
Format mpeg1video
NoAudio
VideoFrameRate 25
VideoBitRate 256
VideoSize cif
VideoBufferSize 40
VideoGopSize 12
</Stream>
<Stream 0.webm>
Feed 0_webm.ffm
Format webm
NoAudio
VideoCodec libvpx
VideoSize 320x240
VideoFrameRate 24
AVOptionVideo flags +global_header
AVOptionVideo cpu-used 0
AVOptionVideo qmin 1
AVOptionVideo qmax 31
AVOptionVideo quality good
PreRoll 0
StartSendOnKey
VideoBitRate 500K
</Stream>
<Stream index.html>
Format status
ACL allow <client_low_ip> <client_high_ip>
</Stream>
Note this is configured for the server at feeder_ip to execute the aforementioned ffmpeg command, and for the server at server_ip to serve clients from client_low_ip through client_high_ip, while handling the MPEG-to-WebM conversion on server_ip (continued below).
This ffmpeg command is executed on the machine previously referred to as server_ip (it handles the actual mpeg --> webm conversion and feeds it back into the ffserver on a different feed):
ffmpeg -i http://<server_ip>:8090/0.mpg -vcodec libvpx http://localhost:8090/0_webm.ffm
Once these have all been started up (first the ffserver, then the feeder_ip ffmpeg process, and then the server_ip ffmpeg process), you should be able to access the live stream at http://<server_ip>:8090/0.webm and check the status at http://<server_ip>:8090/
Hope this helps.
Evren,
Since you originally asked this question, the Media Source Extensions
https://www.w3.org/TR/media-source/ have matured enough to be able to play very short (30 ms) ISO-BMFF video/mp4 segments with just a little buffering.
Refer to
HTML5 live streaming
So your statement
(one cannot wrap h264 and mp3 in real time, package it, and send it to the client)
is now out of date. Yes, you can do it with h264 + AAC.
There are several implementations out there; take a look at Unreal Media Server.
From Unreal Media Server FAQ: http://umediaserver.net/umediaserver/faq.html
How is Unreal HTML5 live streaming different from MPEG-DASH?
Unlike MPEG-DASH, Unreal Media Server uses a WebSocket protocol for live streaming to the HTML5 MSE element in web browsers. This is much more efficient than fetching segments via HTTP requests as MPEG-DASH does. Also, Unreal Media Server sends segments of minimal duration, as low as 30 ms. That allows for low, sub-second-latency streaming, while MPEG-DASH, like other HTTP chunk-based live streaming protocols, cannot provide low-latency live streaming.
Their demos webpage has a live HTML5 feed from RTSP camera:
http://umediaserver.net/umediaserver/demos.html
Notice that the latency in the HTML5 player is comparable to that of the Flash player.
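For reference, the client side of the WebSocket + MSE pattern described in the FAQ looks roughly like this (the codec string and WebSocket URL are illustrative assumptions, not Unreal Media Server's actual API):

    const video = document.querySelector("video");
    const mediaSource = new MediaSource();
    video.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener("sourceopen", () => {
        const sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
        const queue = []; // segments arriving while the buffer is busy
        const ws = new WebSocket("ws://example.com/live");
        ws.binaryType = "arraybuffer";

        ws.onmessage = (e) => {
            queue.push(e.data);
            pump();
        };

        sb.addEventListener("updateend", pump);

        function pump() {
            // appendBuffer throws if called while a previous append is in flight
            if (!sb.updating && queue.length) sb.appendBuffer(queue.shift());
        }
    });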
Not 100% sure you can do this. HTML5 has not ratified any live streaming mechanism. You could use WebSockets and send data in real time to the browser to do this. But you would have to write the parsing logic yourself, and I do not know how you would feed the data to the player as it arrives.
As for the video and audio tags: the video tag can play container files that have both audio and video, so wrap your content in a compatible container. If you can arrange to keep writing the incoming live content to this video file as it arrives, and stream out that data for every byte the browser requests, this could be done. But it is definitely non-trivial.