I want to feed a virtual webcam device from an application window (under Linux/Xorg). I have so far just maximised the window and then used ffmpeg to grab the whole screen like this:
ffmpeg \
-f x11grab -framerate 15 -video_size 1280x1024 -i :0+0,0 \
-f v4l2 -vcodec rawvideo -pix_fmt yuv420p /dev/video6
where /dev/video6 is my v4l2loopback device. This works and I can use the virtual camera in video calls in chrome. This also indicates that the v4l2loopback module is correctly loaded into the kernel.
Unfortunately, it seems that ffmpeg can only read the whole screen, but not an application window. gstreamer on the other hand can. Playing around with gst-launch-1.0, I was hoping that I could get away with something like this:
gst-launch-1.0 ximagesrc xid=XID_OF_MY_WINDOW \
! "video/x-raw" \
! v4l2sink device=/dev/video6
However, that complains that Device '/dev/video6' is not an output device.
Given that ffmpeg seems happy to write to /dev/video6 I also tried piping the gst output to ffmpeg like this:
gst-launch-1.0 ximagesrc xid=XID_OF_MY_WINDOW \
! "video/x-raw" \
! filesink location=/dev/stdout \
| ffmpeg -i - -codec copy -f v4l2 -vcodec rawvideo -pix_fmt yuv420p /dev/video6
But then ffmpeg complains about Invalid data found when processing input.
This is running inside an Xvfb headless environment, so mouse interactions will not work. This rules out OBS as far as I can see.
I'm adding the chrome tag because I see that Chrome in principle would also provide a virtual camera via the --use-fake-device-for-media-stream switch. However, it seems that this switch only supports a static file rather than a stream.
Although I don't see why it should matter, it might be relevant that the other "application window" is simply a second browser window. So the setup is Google Meet (or similar) in one browser window, and the virtual camera gets fed from a second browser window.
You may try adding identity before v4l2sink:
# Better to restart the kernel module first
sudo rmmod v4l2loopback
sudo modprobe v4l2loopback <your_options>
# Get the window id from xwininfo
gst-launch-1.0 ximagesrc xid=0x3000010 ! videoconvert ! video/x-raw,format=YUY2 ! identity drop-allocation=1 ! v4l2sink device=/dev/video6
You should be able to display with:
gst-launch-1.0 v4l2src device=/dev/video6 ! videoconvert ! xvimagesink
Not sure whether it applies to your case, but for some browsers on some targets/OS versions you may have to set exclusive_caps=1 in the options when loading the v4l2loopback kernel module.
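For example, to recreate /dev/video6 with that option set (video_nr and card_label here stand in for your own options):
sudo rmmod v4l2loopback
sudo modprobe v4l2loopback video_nr=6 exclusive_caps=1 card_label="VirtualCam"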
Also note that this may not support any source window resizing.
I have a shell script that runs chromium-browser headless, using it to print html into a pdf file.
$ chromium-browser --headless --disable-gpu \
--print-to-pdf=/home/www-data/build/out.pdf \
/home/www-data/build/out.html
If there are any errors with the page (anything from Javascript errors to unavailable resources), I would like my shell script to behave accordingly. But how do I even tell if such errors occur?
As an example of such an error: an environment variable was not set, and therefore the path for the CSS files was incorrect.
I have tried in various ways to get this information out of chrome headless, but without luck.
My best suggestion is to look for it in the log in the user data directory, but even with the highest logging level, I don't see anything about failing to load the resources. All I can find about loading those resources is shown below:
$ rm /home/www-data/userdata -rf
$ chromium-browser --headless --disable-gpu \
--enable-logging \
--v=4 \
--user-data-dir=/home/www-data/userdata \
--print-to-pdf=/home/www-data/build/out.pdf \
/home/www-data/build/out.html
$ grep css userdata -r
userdata/Default/chrome_debug.log:[0921/163104.096612:VERBOSE1:file_url_loader_factory.cc(451)] FileURLLoader::Start: file:///css/bootstrap.css
userdata/Default/chrome_debug.log:[0921/163104.096746:VERBOSE1:file_url_loader_factory.cc(451)] FileURLLoader::Start: file:///css/email.css
userdata/Default/chrome_debug.log:[0921/163104.096873:VERBOSE1:file_url_loader_factory.cc(451)] FileURLLoader::Start: file:///css/print_common.css
userdata/Default/chrome_debug.log:[0921/163104.097021:VERBOSE1:file_url_loader_factory.cc(451)] FileURLLoader::Start: file:///css/print_A4P.css
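What I'd really like is something I can wrap in the script itself; a rough, untested sketch of the idea (assuming --enable-logging=stderr routes the log to stderr, so it can be captured directly):
chromium-browser --headless --disable-gpu \
    --enable-logging=stderr --v=1 \
    --print-to-pdf=/home/www-data/build/out.pdf \
    /home/www-data/build/out.html 2> chrome.log
# Fail the script if Chromium logged any ERROR-level lines
if grep -q ':ERROR:' chrome.log; then
    echo "page reported errors" >&2
    exit 1
fi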
I'm trying to run end-to-end testing in Chrome for a product that requires a webcam feed halfway through to operate. From what I understand this means providing a fake webcam video to Chrome using the --use-file-for-fake-video-capture="/path/to/video.y4m" command line argument. It will then use that as a webcam video.
However, no matter what y4m file I provide, I get the following error from Chrome running under these conditions:
DOMException: Could not start video source
{
code: 0,
message: "Could not start video source",
name: "NotReadableError"
}
Notably I can provide an audio file just fine using --use-file-for-fake-audio-capture and Chrome will work with it well. The video has been my sticking point.
This error comes out of the following straightforward mediaDevices request:
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
.then(data => {
// do stuff
})
.catch(err => {
// oh no!
});
(This always hits the “oh no!” branch when a video file is provided.)
What I've tried so far
I've been running Chrome with the following command line arguments (newlines added for readability); I'm using a Mac, hence the open command:
open -a "Google Chrome" --args
--disable-gpu
--use-fake-device-for-media-stream
--use-file-for-fake-video-capture="~/Documents/mock/webcam.y4m"
--use-file-for-fake-audio-capture="~/Documents/mock/microphone.wav"
webcam.y4m and microphone.wav were generated from a video file I recorded.
I first recorded a twenty-second mp4 video using my browser's MediaRecorder, downloaded the result, and converted it with the following commands:
ffmpeg -y -i original.mp4 -f wav -vn microphone.wav
ffmpeg -y -i original.mp4 webcam.y4m
When this didn't work, I tried the same using a twenty-second movie file I recorded in Quicktime:
ffmpeg -y -i original.mov -f wav -vn microphone.wav
ffmpeg -y -i original.mov webcam.y4m
When that also failed, I went straight to the Chromium file that explains fake video capture, followed the example y4m file list it provided, downloaded the grandma file, and provided that as a command line argument to Chrome instead:
open -a "Google Chrome" --args
--disable-gpu
--use-fake-device-for-media-stream
--use-file-for-fake-video-capture="~/Documents/mock/grandma_qcif.y4m"
--use-file-for-fake-audio-capture="~/Documents/mock/microphone.wav"
Chrome provides me with the exact same error in all of these situations.
The only time Chrome doesn't error out with that mediaDevices request is when I omit the video completely:
open -a "Google Chrome" --args
--disable-gpu
--use-fake-device-for-media-stream
--use-file-for-fake-audio-capture="~/Documents/mock/microphone.wav"
Accounting for C420mpeg2
TestRTC suggests Chrome will “crash” if I give it a C420mpeg2 file, and recommends that simply replacing the metadata fixes the issue. Indeed, the video file I generate from ffmpeg has the following header:
YUV4MPEG2 W1280 H720 F30:1 Ip A1:1 C420mpeg2 XYSCSS=420MPEG2
Chrome doesn't actually crash when run with this file; I just get the error above. If I edit the video file's header to the following, per TestRTC's recommendation, I get the same situation:
YUV4MPEG2 W1280 H720 F30:1 Ip A1:1 C420 XYSCSS=420MPEG2
The video file still gives me the above error in these conditions.
What can/should I do?
How should I be providing a video file to Chrome for this command line argument?
How should I be recording or creating the video file?
How should I convert it to y4m?
After reading the link you provided, I noticed that we can also provide an mjpeg.
Depending on your test requirements, this may be sufficient for you. As a terminal command with ffmpeg installed:
ffmpeg -i oldfile.mp4 newfile.mjpeg
Then I tested by running Google Chrome from the terminal using:
google-chrome --use-fake-device-for-media-stream --use-file-for-fake-video-capture=newfile.mjpeg
After navigating to Tracking JS I could see the video being played back.
I hope that works for you!
If someone ever needs to mock a video dynamically, this is what I've used (forked from here):
await page.evaluate(() => {
  // Create a <video> element that plays a known clip
  const video = document.createElement("video");
  video.setAttribute("id", "video-mock");
  video.setAttribute("src", "https://woolyss.com/f/spring-vp9-vorbis.webm");
  video.setAttribute("crossorigin", "anonymous");
  video.setAttribute("controls", "");
  // Once enough data is buffered, capture the element's stream and make
  // getUserMedia hand it out instead of a real camera
  video.oncanplay = () => {
    const stream = video.captureStream();
    navigator.mediaDevices.getUserMedia = () => Promise.resolve(stream);
  };
  document.querySelector("body").appendChild(video);
});
The key is to return Promise.resolve(stream).
oncanplay is better than onplay because it is triggered once the video is playable.
These flags are still necessary:
'--use-fake-ui-for-media-stream',
'--use-fake-device-for-media-stream',
In the end, with this script a different camera mock is possible for every page, which is especially useful when using browserless!
Mocked/Fake Raw Video (2021)
Use y4m if you want raw frames without Chrome having to run a decoder:
ffmpeg -i original.avi -pix_fmt yuv420p video-for-chrome.y4m
Then, start Chrome:
chrome.exe --use-fake-device-for-media-stream --use-file-for-fake-video-capture=video-for-chrome.y4m
Note: There is no longer any reason to have to modify your y4m file's header. Chrome has since been fixed.
This method uses less CPU, but will take up a good deal of hard drive space for the raw video. Keep your video file short. Chrome will loop it.
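For example, to keep the file small you could limit the conversion to the first ten seconds (a sketch; the duration is arbitrary):
ffmpeg -i original.avi -t 10 -pix_fmt yuv420p video-for-chrome.y4m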
How to mock webcam on Chrome for Windows
(Tested with Windows 10 Home Build 19043.1526 and Chrome Version 98.0.4758.102 (Official Build) (64-bit))
Install ffmpeg, then run the following command in your shell to convert your mp4 to mjpeg:
./ffmpeg.exe -i originalVideo.mp4 output.mjpeg
Alternatively, you could also create a y4m video from a png image (thanks @LGenzelis):
./ffmpeg.exe -loop 1 -i myStaticImage.png -pix_fmt yuv420p -t 0.05 output.y4m
Close all Chrome instances, then run the following command in your shell:
"C:\Program Files\Google\Chrome\Application\chrome.exe" --use-fake-device-for-media-stream --use-file-for-fake-video-capture="C:/absolute/path/to/output.mjpeg"
Then test it on a website like https://webcamtests.com/
Troubleshooting
Chrome still showing stream from real camera
Make sure there are no other Chrome instances running before launching it with those arguments.
Camera not found
Make sure you're providing the absolute path to your video in --use-file-for-fake-video-capture (e.g. "C:/absolute/path/to/output.mjpeg" instead of just output.mjpeg).
I need to simultaneously stream/broadcast (over RTMP) and save video (with audio) from my USB webcam. The webcam is a Logitech C920, which has a hardware H.264 encoder.
I don't want to reencode the media, so I'm using the -c:v copy option.
The whole script looks like below:
#! /bin/bash
SOURCEV="/dev/video0"
SOURCEA="hw:1"
FILE_TO_SAVE="Archive/file_to_save.mp4"
YOUTUBE_URL="rtmp://x.rtmp.youtube.com/live2"
KEY="my-secret-key"
avconv -f alsa -ac 2 -r 44100 -i $SOURCEA \
-s 1920x1080 -r 24 -c:v h264 -i "$SOURCEV" \
-ar "44100" -r:v 24 -c:a aac -c:v copy -s 1920x1080 -f mp4 "$FILE_TO_SAVE" \
-g $((FPS*4)) -ar "44100" -b:a "128k" -ac 2 -r 24 -c:a aac -c:v copy -s 1920x1080 -f flv "$YOUTUBE_URL/$KEY"
This method "works" - it means' it can stream content and save it to disk, but the problem with this method is that file video relies on the stream. For example if the Internet connection is too slow, the saved file will have low FPS. If the Internet connection is interrupted the "recording" of video file is stopped.
Can anyone help me with making this two streams independent?
The whole things is happening on raspberrypi 3 so computing power is highly limited.
Try installing nginx + nginx-rtmp locally and streaming to it. In the server options, enable saving to local files, and launch a separate script to re-stream to YouTube.
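A minimal sketch of such an nginx-rtmp configuration (the application name, record path, and stream key are placeholders; record and push are standard nginx-rtmp-module directives):
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            # save every incoming stream to disk, independently of the relay
            record all;
            record_path /home/pi/Archive;
            record_suffix -%d-%b-%y-%T.flv;
            # relay the incoming stream to YouTube
            push rtmp://x.rtmp.youtube.com/live2/my-secret-key;
        }
    }
}
Your capture script then sends a single stream to rtmp://localhost/live/<name>; nginx writes the file and feeds YouTube separately, so a slow uplink no longer affects the recording.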
I have created a virtual ALSA loopback device and I am trying to open a YouTube link in google-chrome or chromium-browser and send its audio output to that virtual device. Then, using ffmpeg, I am trying to capture the audio. But no matter what I do, Chrome or Chromium always sends audio output to the default built-in speakers. If I open the volume control panel and change the output of the application in the playback section to the loopback device, then it works. But my requirement is to be able to do this programmatically, by telling Chrome which device to send audio to.
Following are the commands which I tried to make it happen:
google-chrome --window-position=0,0 --window-size=1920,1080 --alsa-output-device=alsa_output.1.analog-stereo.monitor -kiosk https://www.youtube.com/watch?v=LTbnmiXWs2k
google-chrome --window-position=0,0 --window-size=1920,1080 --alsa-output-device=hw:1,0 -kiosk https://www.youtube.com/watch?v=LTbnmiXWs2k
And following is the ffmpeg command which is working fine:
ffmpeg -f pulse -i alsa_output.1.analog-stereo.monitor -ac 1 -ar 16000 test.wav
Any help will be appreciated.
I have also suffered from the same problem. Google doesn't document the complete format for the ALSA output device, so you can follow the procedure below.
List the available plug hardware:
aplay -L
null
Discard all samples (playback) or generate zero samples (capture)
pulse
PulseAudio Sound Server
default:CARD=PCH
HDA Intel PCH, ALC662 rev3 Analog
Default Audio Device
sysdefault:CARD=PCH
HDA Intel PCH, ALC662 rev3 Analog
Default Audio Device
front:CARD=PCH,DEV=0
HDA Intel PCH, ALC662 rev3 Analog
Front speakers
surround21:CARD=PCH,DEV=0
HDA Intel PCH, ALC662 rev3 Analog
2.1 Surround output to Front and Subwoofer speakers
Now select your plug hardware from the list and add its name to --alsa-output-device=
--alsa-output-device='plug:surround21'
So your complete command will look like this:
google-chrome --window-position=0,0 --window-size=1920,1080 --alsa-output-device='plug:surround21' -kiosk https://www.youtube.com/watch?v=LTbnmiXWs2k
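To sanity-check the device name before launching Chrome, you can play a test tone on it (speaker-test ships with alsa-utils):
speaker-test -D plug:surround21 -c 2 -t wav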
I faced the same problem while attempting to do the same. Here's what actually worked:
chrome (...) --alsa-output-device=hw:0,0
ffmpeg -f alsa -ac 2 -i hw:0,1,1 test.wav
This has chrome using (card 0, device 0) for output, which is looped back to (card 0, device 1, substream 1). The format is --alsa-output-device=hw:card,device
The opposite also works:
chrome (...) --alsa-output-device=hw:0,1
ffmpeg -f alsa -ac 2 -i hw:0,0,1 test.wav
Selecting the substream (e.g. --alsa-output-device=hw:0,1,4) does not seem to be possible. When capturing with ffmpeg, just assume substream 1.
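For reference, the loopback card used above comes from the ALSA loopback kernel module; loading it looks roughly like this (index=0 assumes you want it to be card 0, as in the commands above):
sudo modprobe snd-aloop index=0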
I'm currently developing an application which will enable visualizing images from different sources (mostly IP cameras) in the browser (in an HTML5 video element). The UI will allow for a matrix view, so normally 16 or more cameras will be displayed at the same time.
From cameras I get MJPEG streams or JPEG images (which I "convert" to MJPEG streams). So, for a camera, I have an MJPEG stream which I set as input for ffmpeg. I instruct ffmpeg to convert this to MP4 & H.264, and expose the output as a tcp stream, like this:
ffmpeg -f mjpeg -i "http://localhost/video.mjpg" -f mp4 -vcodec libx264 "tcp://127.0.0.1:5001?listen"
This works just fine on localhost, I get the stream displayed in the web page, at best quality.
But this has to work in various network conditions. I played a bit with Chrome's throttling settings and noticed that if the network speed is just a bit below the required speed (given by the current compression settings I use in ffmpeg), things start to go wrong: from the stream's start being delayed (so it is no longer a live stream), up to a complete freeze of the 'live' image in the browser.
What I need is an "adaptive" way to do the compression, in relation with current network speed.
My questions are:
is ffmpeg able to handle this and adapt to network conditions, i.e. automatically reduce compression quality when the speed is low? The image in the browser would then be lower quality, but live (which is most important in my case)
if not, is there a way to work around this?
is there a way to detect the network bottleneck? (I could then restart ffmpeg with lower compression parameters; this is not dynamic adaptive streaming, but better than nothing)
Thank you in advance!
Your solution will not work outside the local network. Why? Because you need to use HTTP. For that, the best solution is to use HLS or DASH.
HLS
ffmpeg -i input.mp4 -s 640x360 -start_number 0 -hls_time 10 -hls_list_size 0 -f hls index.m3u8
To generate adaptive streams you have to create a second-level index. I won't explain it here because it is clearly described in Apple's documentation: https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/Introduction/Introduction.html#//apple_ref/doc/uid/TP40008332-CH1-SW1
and in the standard: https://datatracker.ietf.org/doc/html/draft-pantos-http-live-streaming-18
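For illustration only, a minimal second-level (master) playlist referencing two variants might look like this (bandwidths, resolutions, and paths are made up):
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
720p/index.m3u8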
DASH
At the moment FFmpeg does not support DASH encoding. You can segment with FFmpeg (https://www.ffmpeg.org/ffmpeg-formats.html#segment_002c-stream_005fsegment_002c-ssegment), but I recommend combining FFmpeg and MP4Box: FFmpeg to transcode your live video, and MP4Box to segment it and create the .mpd index.
MP4Box is part of GPAC (http://gpac.wp.mines-telecom.fr/).
An example (using H.264) can be the following. If you need VP8 (webm), use -vcodec libvpx and -f webm or -f ts:
ffmpeg -threads 4 -f v4l2 -i /dev/video0 -acodec libfaac -ar 44100 -ab 128k -ac 2 -vcodec libx264 -r 30 -s 1280x720 -f mp4 -y "$movie" > temp1.mp4 && MP4Box -dash 10000 -frag 1000 -rap "$movie"