I'm currently developing an application that will visualize images from different sources (mostly IP cameras) in the browser (in an HTML5 video element). The UI will allow a matrix view, so normally 16 or more cameras will be displayed at the same time.
From the cameras I get MJPEG streams or JPEG images (which I "convert" to MJPEG streams). So, for each camera, I have an MJPEG stream which I set as input for ffmpeg. I instruct ffmpeg to convert this to MP4/H.264 and expose the output as a TCP stream, like this:
ffmpeg -f mjpeg -i "http://localhost/video.mjpg" -f mp4 -vcodec libx264 "tcp://127.0.0.1:5001?listen"
This works just fine on localhost: I get the stream displayed in the web page at best quality.
But this has to work in various network conditions. I played a bit with Chrome's throttling settings and noticed that if the network speed is just a bit below the speed required by my current ffmpeg compression settings, things start to go wrong: from the stream start being delayed (so, no longer a live stream), up to a complete freeze of the 'live' image in the browser.
What I need is an "adaptive" way to do the compression, in relation to the current network speed.
My questions are:
- Is ffmpeg able to handle this and adapt to network conditions, automatically reducing compression quality when the speed is low? The image in the browser would then be lower quality, but live (which is what matters most in my case).
- If not, is there a way to work around this?
- Is there a way to detect the network bottleneck (and then restart ffmpeg with lower compression parameters)? This is not dynamic adaptive streaming, but it is better than nothing.
Thank you in advance!
Your solution does not work outside the local network. Why? Because you have to use HTTP. For that, the best solution is to use HLS or DASH.
HLS
ffmpeg -i input.mp4 -s 640x360 -start_number 0 -hls_time 10 -hls_list_size 0 -f hls index.m3u8
To generate adaptive streams you have to create a second-level (master) index. I won't explain it here because it is explained really clearly in the Apple documentation: https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/Introduction/Introduction.html#//apple_ref/doc/uid/TP40008332-CH1-SW1
and in the standard: https://datatracker.ietf.org/doc/html/draft-pantos-http-live-streaming-18
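For illustration, a minimal master (second-level) index referencing two variant streams could look like the following; the bandwidths, resolutions, and paths are placeholders to adapt to your own renditions:
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2000000,RESOLUTION=1280x720
high/index.m3u8
The client starts with this playlist and switches between the variants according to the measured network speed, which is exactly the adaptive behavior asked about above.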
DASH
At the moment FFmpeg does not support DASH encoding. You can segment with FFmpeg (https://www.ffmpeg.org/ffmpeg-formats.html#segment_002c-stream_005fsegment_002c-ssegment), but I recommend combining FFmpeg and MP4Box: FFmpeg to transcode your live video, and MP4Box to segment it and create the .mpd index.
MP4Box is part of GPAC (http://gpac.wp.mines-telecom.fr/).
An example can be (using H.264); if you need VP8 (WebM), use -vcodec libvpx and -f webm (or -f ts):
ffmpeg -threads 4 -f v4l2 -i /dev/video0 -acodec libfaac -ar 44100 -ab 128k -ac 2 -vcodec libx264 -r 30 -s 1280x720 -f mp4 -y "$movie" && MP4Box -dash 10000 -frag 1000 -rap "$movie"
Related
I am trying to stream my RTSP IP camera on a website. I use the Nginx web server. My source in the HTML code is:
<source src="rtmp://ip-address:1935/live/" type="application/x-mpegURL" />
To convert the RTSP stream I use this ffmpeg command:
ffmpeg -rtsp_transport tcp -i rtsp://user:password@ip-camera:554/h264Preview_01_main -vcodec copy -acodec copy -f mp4 -y rtmp://ip-address:1935/live/
I get the error message:
"muxer does not support non seekable output
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument"
I also tried to convert the RTSP camera stream into an MP4 file and then use that file as the source in my HTML code, but I couldn't read the file while ffmpeg was writing to it.
If you need more information let me know.
Thank you and have a nice day.
First, you should use the FLV format, not MP4: the MP4 muxer needs seekable output to write its header, which is exactly what the error message says, while RTMP carries FLV. Additionally, you should specify a stream name (the "Stream Key" in OBS), like livestream:
ffmpeg -rtsp_transport tcp -i rtsp://user:password@ip-camera:554/h264Preview_01_main -c copy \
-f flv -y rtmp://ip-address:1935/live/livestream
This converts RTSP to RTMP, and you can then let the server convert the RTMP stream to HLS, as you already did.
Note that the latency of HLS is large, about 5~10 s; if you want lower latency, use HTTP-FLV or WebRTC instead.
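For reference, the RTMP-to-HLS conversion on the server side is typically done with the nginx-rtmp module; a minimal sketch of such a configuration, assuming nginx is built with that module (the application name and path are placeholders):
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            # let nginx generate the HLS playlist and segments
            hls on;
            hls_path /var/www/hls;
            hls_fragment 3s;
        }
    }
}
The HTML5 player then points at the generated .m3u8 file served over HTTP from /var/www/hls, not at the rtmp:// URL itself.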
I have been trying to display my IP camera's output on a webpage to be viewed on an iThing (iPad or iPhone).
I'm displaying the output in a video tag like the one below:
<video id='hls-example' class="video-js vjs-default-skin" width="400" height="300" controls>
<source type="application/x-mpegURL" src="http://127.0.0.1/wordpress/prog_index.m3u8">
</video>
I'm using ffmpeg to mux/convert (I may have my terminology wrong) the camera's HTTP stream (not its RTSP stream).
I've tried the multiple commands below; some of them work on a PC in Chrome, but none of them work on an iPad in Safari or Chrome.
All the files are being generated in the correct locations on the web server, so they should be available for display.
ffmpeg -i http://username:password@192.168.102.92/ISAPI/Streaming/channels/102/httpPreview -force_key_frames "expr:gte(t,n_forced*2)" -sc_threshold 0 -s 640x480 -c:v libx264 -b:v 1536k -c:a copy -hls_time 6 -hls_playlist_type vod -hls_segment_type fmp4 -hls_segment_filename "fileSequence%d.m4s" -hls_wrap 3 prog_index.m3u8
ffmpeg -i http://username:password@192.168.102.92/ISAPI/Streaming/channels/102/httpPreview -force_key_frames "expr:gte(t,n_forced*2)" -sc_threshold 0 -s 640x480 -c:v libx264 -b:v 1536k -c:a copy -hls_time 6 -hls_playlist_type vod -hls_segment_type fmp4 -hls_segment_filename "fileSequence%d.m4s" -hls_list_size 10 prog_index.m3u8
ffmpeg -i http://username:password@192.168.102.92/ISAPI/Streaming/channels/102/httpPreview -force_key_frames "expr:gte(t,n_forced*2)" -sc_threshold 0 -s 640x480 -b:v 1536k -c:a copy -hls_time 6 -hls_segment_type fmp4 -hls_segment_filename "fileSequence%d.m4s" -hls_list_size 10 prog_index.m3u8
ffmpeg -i http://username:password@192.168.102.92/ISAPI/Streaming/channels/102/httpPreview -force_key_frames "expr:gte(t,n_forced*2)" -sc_threshold 0 -s 640x480 -b:v 1536k -c:a copy -hls_time 3 -hls_flags delete_segments -hls_segment_type fmp4 -hls_segment_filename "fileSequence%d.m4s" prog_index.m3u8
Can someone point out where I'm going wrong? I think it's the ffmpeg command.
The first and second commands might not work because of the -hls_playlist_type vod parameter.
VOD is made for a static file. Since you have an IP camera live stream, this might cause problems. If you want the full history (start the stream at a specific point and keep the whole history until the encoding stops), you should use EVENT instead. If you just want a live stream, remove the parameter.
Second, all commands copy the audio stream. Since you are already re-encoding the video, re-encoding the audio doesn't add much more CPU load, so I recommend re-encoding it as well. This allows FFmpeg to create the HLS segments cleanly. -c:a aac -b:a 128k -ac 2 would be a good start for this.
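Putting both suggestions together, a sketch of your fourth command with the video encoder made explicit (as in your first two commands) and the audio re-encoded; your camera URL is kept as a placeholder:
ffmpeg -i http://username:password@192.168.102.92/ISAPI/Streaming/channels/102/httpPreview -force_key_frames "expr:gte(t,n_forced*2)" -sc_threshold 0 -s 640x480 -c:v libx264 -b:v 1536k -c:a aac -b:a 128k -ac 2 -hls_time 3 -hls_flags delete_segments -hls_segment_type fmp4 -hls_segment_filename "fileSequence%d.m4s" prog_index.m3u8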
Apple also provides tools (mediastreamvalidator and hlsreport) that verify your HLS stream (you need a Mac to run them). More details on how to use them: https://www.martin-riedl.de/2018/09/09/hls-stream-validation/
I need to simultaneously stream/broadcast (over RTMP) and save video (with audio) from my USB webcam. The webcam is a Logitech C920, which has a hardware H.264 encoder.
I don't want to re-encode the media, so I'm using the -c:v copy option.
The whole script looks like the one below:
#! /bin/bash
SOURCEV="/dev/video0"
SOURCEA="hw:1"
FILE_TO_SAVE="Archive/file_to_save.mp4"
YOUTUBE_URL="rtmp://x.rtmp.youtube.com/live2"
KEY="my-secret-key"
FPS=24  # video frame rate, used to size the GOP below
avconv -f alsa -ac 2 -ar 44100 -i "$SOURCEA" \
    -f video4linux2 -s 1920x1080 -r 24 -c:v h264 -i "$SOURCEV" \
    -ar 44100 -r:v 24 -c:a aac -c:v copy -f mp4 "$FILE_TO_SAVE" \
    -g $((FPS * 4)) -ar 44100 -b:a 128k -ac 2 -r 24 -c:a aac -c:v copy -f flv "$YOUTUBE_URL/$KEY"
This method "works": it can stream content and save it to disk. The problem is that the saved file depends on the stream. For example, if the Internet connection is too slow, the saved file will have a low FPS; if the Internet connection is interrupted, the recording of the video file stops.
Can anyone help me make these two outputs independent?
The whole thing is happening on a Raspberry Pi 3, so computing power is highly limited.
Try installing nginx + nginx-rtmp locally and stream to it. In the server options, enable saving to local files, and launch another process to re-stream to YouTube.
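A minimal sketch of such a configuration, assuming the nginx-rtmp module is installed; the record path is a placeholder, and the YouTube URL/key are taken from your script:
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            # save the incoming stream to disk, independent of the uplink
            record all;
            record_path /home/pi/Archive;
            record_suffix -%Y%m%d-%H%M%S.flv;
            # re-stream the same input to YouTube
            push rtmp://x.rtmp.youtube.com/live2/my-secret-key;
        }
    }
}
Your avconv command would then send a single FLV output to rtmp://localhost:1935/live/stream; if the Internet connection drops, only the push suffers, while the local recording continues.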
I have a problem: when I upload a video to my website and play it in my HTML5 player (video.js), it lags. It's weird, because not all MP4 files lag on the site, only some videos; and when I download them from my server and play them on my computer, they play normally.
Why are some videos lagging? Does someone have an explanation for this?
If the problem is bandwidth, then depending on the format, source bit rate, frame size, etc., you'll want to re-encode to a more optimal size for your intended purpose:
ffmpeg -i "my.mp4" -f mp4 -vcodec mpeg4 -b 512k -r 30 -s 640x360 -acodec libfaac -ar 32000 -ab 128k -ac 2 -threads 8 -movflags faststart "my_reduced.mp4"
-b = video bit rate (a lower value means a smaller video file, but it also reduces the quality of the video)
-s = resolution of the video; optimize it to match the desired output (but remember to maintain the correct aspect ratio)
-movflags faststart = relocates the metadata to the start of the file, reducing buffering time
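If the stalling videos turn out to be files whose metadata (the moov atom) sits at the end of the file, which forces the browser to fetch much of the file before playback can start, you can relocate it without re-encoding at all; a quick sketch:
ffmpeg -i "my.mp4" -c copy -movflags faststart "my_faststart.mp4"
This is much faster than a full re-encode and leaves the quality untouched.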
I am having an issue converting some H.264-encoded video to MJPEG format (contained in an AVI). The video is a single video track and has no audio.
I have tried using avconv with the following command: avconv -i test_5.mp4 -vcodec mjpeg -q 1 -r 30 out.avi, which results in a video that plays back far too fast.
I have also tried using Handbrake, however this produced an unplayable video.
The H264 video is captured with the raspivid utility on the Raspberry Pi at 30 FPS.
I think you can get it done using ffmpeg instead of avconv (libav). You must download an ffmpeg binary package from here or compile it yourself from source, then check out this howto.
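Playback that is far too fast usually means the input carries no usable timestamps (raw raspivid H.264 has none), so it is worth forcing the input frame rate before -i; a sketch, assuming the 30 FPS capture mentioned above:
ffmpeg -r 30 -i test_5.mp4 -c:v mjpeg -q:v 1 out.avi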