I need to simultaneously stream/broadcast (over RTMP) and save video (with audio) from my USB webcam. The webcam is a Logitech C920, which has a hardware H.264 encoder.
I don't want to re-encode the media, so I'm using the -c:v copy option.
The whole script looks like this:
#!/bin/bash
SOURCEV="/dev/video0"
SOURCEA="hw:1"
FPS=24
FILE_TO_SAVE="Archive/file_to_save.mp4"
YOUTUBE_URL="rtmp://x.rtmp.youtube.com/live2"
KEY="my-secret-key"

# Audio from ALSA; video is the camera's hardware H.264 stream (-c:v h264 before -i).
# Note: with -c:v copy, size/framerate options on the outputs would have no effect.
avconv -f alsa -ac 2 -ar 44100 -i "$SOURCEA" \
-s 1920x1080 -r $FPS -c:v h264 -i "$SOURCEV" \
-ar 44100 -c:a aac -c:v copy -f mp4 "$FILE_TO_SAVE" \
-g $((FPS * 4)) -ar 44100 -b:a 128k -ac 2 -c:a aac -c:v copy -f flv "$YOUTUBE_URL/$KEY"
This method "works" - it means' it can stream content and save it to disk, but the problem with this method is that file video relies on the stream. For example if the Internet connection is too slow, the saved file will have low FPS. If the Internet connection is interrupted the "recording" of video file is stopped.
Can anyone help me make these two streams independent?
The whole thing is running on a Raspberry Pi 3, so computing power is highly limited.
Try installing nginx with the nginx-rtmp module locally and stream to it. In the server options, enable saving to local files, and launch a separate script to re-stream to YouTube. That way the local recording no longer depends on the Internet connection.
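A minimal sketch of that setup, assuming nginx is built with the nginx-rtmp module; the application name, recording path, and stream name below are placeholders, not from the question:
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            # Save everything published here to local files,
            # independently of the Internet connection
            record all;
            record_path /home/pi/Archive;
            record_suffix .flv;
        }
    }
}
Point the capture script at rtmp://localhost/live/stream instead of YouTube, then re-stream with a second process, for example:
ffmpeg -i rtmp://localhost/live/stream -c copy -f flv "rtmp://x.rtmp.youtube.com/live2/$KEY"
If the re-streaming process dies when the connection drops, the local recording keeps going, and a supervisor (or a simple while loop) can restart it.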
I want to feed a virtual webcam device from an application window (under Linux/Xorg). So far, I have just maximised the window and then used ffmpeg to grab the whole screen like this:
ffmpeg \
-f x11grab -framerate 15 -video_size 1280x1024 -i :0+0,0 \
-f v4l2 -vcodec rawvideo -pix_fmt yuv420p /dev/video6
where /dev/video6 is my v4l2loopback device. This works and I can use the virtual camera in video calls in chrome. This also indicates that the v4l2loopback module is correctly loaded into the kernel.
Unfortunately, it seems that ffmpeg can only read the whole screen, not a single application window. gstreamer, on the other hand, can. Playing around with gst-launch-1.0, I was hoping I could get away with something like this:
gst-launch-1.0 ximagesrc xid=XID_OF_MY_WINDOW \
! "video/x-raw" \
! v4l2sink device=/dev/video6
However, that complains that Device '/dev/video6' is not an output device.
Given that ffmpeg seems happy to write to /dev/video6, I also tried piping the gst output to ffmpeg like this:
gst-launch-1.0 ximagesrc xid=XID_OF_MY_WINDOW \
! "video/x-raw" \
! filesink location=/dev/stdout \
| ffmpeg -i - -codec copy -f v4l2 -vcodec rawvideo -pix_fmt yuv420p /dev/video6
But then ffmpeg complains about Invalid data found when processing input.
This is running inside an xvfb headless environment, so mouse interactions will not work. As far as I can see, this rules out OBS.
I'm adding the chrome tag because Chrome in principle also provides a virtual camera via the --use-fake-device-for-media-stream switch. However, it seems that this switch only supports a static file rather than a stream.
Although I don't see why it would matter, the "application window" in question is itself simply a second browser window. So the setup is Google Meet (or similar) in one browser window, with the virtual camera fed from a second browser window.
You may try adding identity before v4l2sink:
# Better to restart the kernel module first
sudo rmmod v4l2loopback
sudo modprobe v4l2loopback <your_options>
# Window id obtained from xwininfo
gst-launch-1.0 ximagesrc xid=0x3000010 ! videoconvert ! video/x-raw,format=YUY2 ! identity drop-allocation=1 ! v4l2sink device=/dev/video6
You should be able to display with:
gst-launch-1.0 v4l2src device=/dev/video6 ! videoconvert ! xvimagesink
No idea for your case, but for some browsers on some targets/OS/versions you may have to set exclusive_caps=1 in the options when loading the v4l2loopback kernel module.
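For example (the device number and label here are illustrative):
sudo rmmod v4l2loopback
sudo modprobe v4l2loopback devices=1 video_nr=6 exclusive_caps=1 card_label="VirtualCam"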
Also note that this may not cope with the source window being resized.
I have been trying to display my IP camera's output on a webpage, to be viewed on an iOS device (iPad or iPhone).
I'm displaying the output in a video tag like below:
<video id='hls-example' class="video-js vjs-default-skin" width="400" height="300" controls>
<source type="application/x-mpegURL" src="http://127.0.0.1/wordpress/prog_index.m3u8">
</video>
I'm using ffmpeg to mux/convert (I may have my terminology wrong) the camera's HTTP stream (not its RTSP stream).
I've tried the multiple commands below; some of them work on a PC in Chrome, but none of them work on an iPad in Safari or Chrome.
All the files are being generated in the correct locations on the webserver, so they should be available to be displayed.
ffmpeg -i http://username:password@192.168.102.92/ISAPI/Streaming/channels/102/httpPreview -force_key_frames "expr:gte(t,n_forced*2)" -sc_threshold 0 -s 640x480 -c:v libx264 -b:v 1536k -c:a copy -hls_time 6 -hls_playlist_type vod -hls_segment_type fmp4 -hls_segment_filename "fileSequence%d.m4s" -hls_wrap 3 prog_index.m3u8
ffmpeg -i http://username:password@192.168.102.92/ISAPI/Streaming/channels/102/httpPreview -force_key_frames "expr:gte(t,n_forced*2)" -sc_threshold 0 -s 640x480 -c:v libx264 -b:v 1536k -c:a copy -hls_time 6 -hls_playlist_type vod -hls_segment_type fmp4 -hls_segment_filename "fileSequence%d.m4s" -hls_list_size 10 prog_index.m3u8
ffmpeg -i http://username:password@192.168.102.92/ISAPI/Streaming/channels/102/httpPreview -force_key_frames "expr:gte(t,n_forced*2)" -sc_threshold 0 -s 640x480 -b:v 1536k -c:a copy -hls_time 6 -hls_segment_type fmp4 -hls_segment_filename "fileSequence%d.m4s" -hls_list_size 10 prog_index.m3u8
ffmpeg -i http://username:password@192.168.102.92/ISAPI/Streaming/channels/102/httpPreview -force_key_frames "expr:gte(t,n_forced*2)" -sc_threshold 0 -s 640x480 -b:v 1536k -c:a copy -hls_time 3 -hls_flags delete_segments -hls_segment_type fmp4 -hls_segment_filename "fileSequence%d.m4s" prog_index.m3u8
Can someone point out where I'm going wrong? I think it's the ffmpeg command.
The first and second commands might not work because of the -hls_playlist_type vod parameter.
VOD is meant for a static file. Since you have an IP camera live stream, this can cause problems. If you want the full history (start the stream at a specific point and keep the whole history until the encoding stops), you should use EVENT instead. If you just want a live stream, remove the parameter.
Second, all of the commands copy the audio stream. Since you are already re-encoding the video, re-encoding the audio does not add much CPU load, and it allows FFmpeg to create clean HLS segments. -c:a aac -b:a 128k -ac 2 would be a good start for this.
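Putting both suggestions together, a live variant of your first command might look like this (a sketch, untested against your camera; URL and sizes are taken from your question):
ffmpeg -i "http://username:password@192.168.102.92/ISAPI/Streaming/channels/102/httpPreview" -force_key_frames "expr:gte(t,n_forced*2)" -sc_threshold 0 -s 640x480 -c:v libx264 -b:v 1536k -c:a aac -b:a 128k -ac 2 -hls_time 6 -hls_list_size 10 -hls_flags delete_segments -hls_segment_type fmp4 -hls_segment_filename "fileSequence%d.m4s" prog_index.m3u8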
Apple also provides tools (mediastreamvalidator and hlsreport) that validate your HLS stream (you need a Mac to run them). More details on how to use them: https://www.martin-riedl.de/2018/09/09/hls-stream-validation/
I'm currently developing an application that visualizes images from different sources (mostly IP cameras) in the browser (in an HTML5 video element). The UI allows a matrix view, so normally 16 or more cameras will be displayed at the same time.
From the cameras I get MJPEG streams or JPEG images (which I "convert" to MJPEG streams). So, for each camera, I have an MJPEG stream that I set as the input for ffmpeg. I instruct ffmpeg to convert this to MP4 and H.264, and to expose the output as a TCP stream, like this:
ffmpeg -f mjpeg -i "http://localhost/video.mjpg" -f mp4 -vcodec libx264 "tcp://127.0.0.1:5001?listen"
This works just fine on localhost; I get the stream displayed in the web page at best quality.
But this has to work under various network conditions. I played a bit with Chrome's throttling settings and noticed that if the network speed is just a bit below the required speed (determined by the compression settings I currently use in ffmpeg), things start to go wrong: from the stream start being delayed (so it is no longer a live stream) up to a complete freeze of the 'live' image in the browser.
What I need is an "adaptive" way to do the compression, in relation with current network speed.
My questions are:
Is ffmpeg able to handle this and adapt to network conditions, i.e. automatically reduce the compression quality when the speed is low? The image in the browser would then be of lower quality, but live (which is most important in my case).
If not, is there a way to work around this?
Is there a way to detect the network bottleneck? (I could then restart ffmpeg with lower compression parameters; that is not dynamic adaptive streaming, but better than nothing.)
Thank you in advance!
Your solution does not work outside the local network. Why? Because you would need to use HTTP. For that, the best solution is to use HLS or DASH.
HLS
ffmpeg -i input.mp4 -s 640x360 -start_number 0 -hls_time 10 -hls_list_size 0 -f hls index.m3u8
To generate adaptive streams you have to create a second-level (master) index. I won't explain it here because it is covered clearly in Apple's documentation: https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/Introduction/Introduction.html#//apple_ref/doc/uid/TP40008332-CH1-SW1
and in the standard: https://datatracker.ietf.org/doc/html/draft-pantos-http-live-streaming-18
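For illustration, a minimal second-level (master) playlist could look like this (the bandwidths, resolutions, and paths are placeholders):
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=1280x720
720p/index.m3u8
The client then switches between the variants according to its measured bandwidth, which gives you the adaptive behaviour you are asking about.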
DASH
At the moment FFmpeg does not support DASH encoding natively. You can segment with FFmpeg (https://www.ffmpeg.org/ffmpeg-formats.html#segment_002c-stream_005fsegment_002c-ssegment), but I recommend combining FFmpeg and MP4Box: FFmpeg to transcode your live video, and MP4Box to segment it and create the .mpd index.
MP4Box is part of GPAC (http://gpac.wp.mines-telecom.fr/).
An example (using H.264) can be the following; if you need VP8 (WebM), use -vcodec libvpx and -f webm (or -f ts):
ffmpeg -threads 4 -f v4l2 -i /dev/video0 -acodec libfaac -ar 44100 -ab 128k -ac 2 -vcodec libx264 -r 30 -s 1280x720 -f mp4 -y "$movie" && MP4Box -dash 10000 -frag 1000 -rap "$movie"
I'm trying to record video using ffmpeg and then play it back in a player using MSE. Here's the command I'm using:
ffmpeg -i /dev/video0 -c:v libx264 -profile:v baseline -level:v 13 -g 250 -r 25 -keyint_min 250 -strict experimental -pix_fmt yuv420p -movflags frag_keyframe+empty_moov -b:a 96k sintel.mp4
This works except for the fact that there is an mfra box at the end of the video file, which I believe is not supported by MSE. How can I remove this mfra box?
Change your movflags to:
-movflags empty_moov+default_base_moof
and if you want it to also work on Chrome, use:
-movflags empty_moov+default_base_moof+frag_keyframe
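Applied to the command from the question, the output options would then be (everything else unchanged):
ffmpeg -i /dev/video0 -c:v libx264 -profile:v baseline -level:v 13 -g 250 -r 25 -keyint_min 250 -strict experimental -pix_fmt yuv420p -movflags empty_moov+default_base_moof+frag_keyframe -b:a 96k sintel.mp4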
I have a problem. When I upload a video to my website and play it in my HTML5 player (video.js), it lags. It's weird, because not all MP4 files lag on the site, only some videos. Yet when I download them from my server and play them on my computer, they play normally.
Why are some videos lagging? Does someone have an explanation for this?
If the problem is bandwidth, then depending on the format, source bit rate, frame size, etc., you'll want to re-encode to a more optimal size for your intended purpose:
ffmpeg -i "my.mp4" -f mp4 -vcodec mpeg4 -b 512k -r 30 -s 640x360 -acodec libfaac -ar 32000 -ab 128k -ac 2 -threads 8 -movflags faststart "my_reduced.mp4"
-b = video bit rate (a lower value means a smaller video file, but it also reduces the quality of the video)
-s = resolution of the video; optimize it to match the desired output (but remember to maintain the correct aspect ratio)
-movflags faststart = relocates metadata to the start of the file, reducing buffering time
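If the encoding itself is fine and only the metadata placement causes the stalling (which would match files that play fine locally but lag on the web), you can also relocate the metadata without re-encoding; a minimal sketch with placeholder filenames:
ffmpeg -i "my.mp4" -c copy -movflags faststart "my_faststart.mp4"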