I am trying to create a virtual camera in Chrome using v4l2loopback where the incoming video is H264 via RTP.
I have had some success in getting a GStreamer test video recognized in Chrome via MediaStreamTrack.getSources:
$ sudo modprobe v4l2loopback
$ gst-launch-1.0 videotestsrc ! v4l2sink device=/dev/video0
This works well: Chrome displays the video test source.
However, when I use an incoming H264/RTP source, the device does not show up in MediaStreamTrack.getSources. For example:
gst-launch-1.0 -v tcpclientsrc host=<IPADDRESS> port=5000 ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! v4l2sink device=/dev/video0
What is the reason for this? What would the solution be?
I had thought this might have to do with the video formats, and that the correct caps need to be set through v4l2loopback.
This looks like a bug in GStreamer or v4l2loopback. It is somehow related to how a variable frame rate is handled.
I managed to reproduce it in this way:
Start pipeline transmitting video from network to /dev/video0
$ gst-launch-1.0 -v tcpserversrc port=5000 \
! gdpdepay ! rtph264depay \
! decodebin \
! v4l2sink device=/dev/video0
Start pipeline transmitting some video to port 5000
$ gst-launch-1.0 -v videotestsrc \
! x264enc ! rtph264pay ! gdppay \
! tcpserversink port=5000
Try to get video from /dev/video0
$ gst-launch v4l2src device=/dev/video0 ! autovideosink
...
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Device '/dev/video1' is not a capture device.
Now, note the caps for v4l2sink in the debug log of the first pipeline.
/GstPipeline:pipeline0/GstV4l2Sink:v4l2sink0.GstPad:sink: caps = video/x-raw, format=(string)I420, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, colorimetry=(string)bt601, framerate=(fraction)0/1
It mentions that framerate=(fraction)0/1. In GStreamer's terms this means that the frame rate is variable. According to v4l2sink's source code, it seems that it feeds this same frame rate to the v4l2loopback kernel module, but v4l2loopback does not understand a zero frame rate.
(This is only a hypothesis; I still need to check whether this is what really happens.)
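One way to check the hypothesis, assuming the v4l-utils package is installed, is to query the loopback device while the first pipeline is running and see which frame rate it reports:
$ v4l2-ctl --device=/dev/video0 --get-fmt-video
$ v4l2-ctl --device=/dev/video0 --get-parm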
To work around this bug you can fix the frame rate. Just add a videorate element to the first pipeline:
$ gst-launch-1.0 -v tcpserversrc port=5000 \
! gdpdepay ! rtph264depay \
! decodebin \
! videorate ! video/x-raw, framerate=25/1 \
! v4l2sink device=/dev/video0
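With the frame rate fixed, the capture pipeline from the reproduction steps should now work as a quick sanity check (same device path as above):
$ gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! autovideosink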
I'm creating a CI flow that uses Appium and the iOS Simulator on macos-latest. My app changes its language along with the simulator language. I found that editing the .GlobalPreferences.plist file and then booting the simulator should switch it to Japanese, but the simulator still gets the default language (en).
Node.js: 16
Java: 11
Appium: 1.22.3
macOS: latest
iOS runtime: 12.4
Device: iPhone X simulator
xcrun simctl create TestiPhone com.apple.CoreSimulator.SimDeviceType.iPhone-X com.apple.CoreSimulator.SimRuntime.iOS-12-4 > deviceid.txt
DEVICEUUID=`cat deviceid.txt`
echo $DEVICEUUID
plutil -p ~/Library/Developer/CoreSimulator/Devices/$DEVICEUUID/data/Library/Preferences/.GlobalPreferences.plist
plutil -replace AppleLocale -string "ja_US" ~/Library/Developer/CoreSimulator/Devices/$DEVICEUUID/data/Library/Preferences/.GlobalPreferences.plist
plutil -replace AppleLanguages -json "[ \"ja\" ]" ~/Library/Developer/CoreSimulator/Devices/$DEVICEUUID/data/Library/Preferences/.GlobalPreferences.plist
echo "Verify locale and language ~ JP"
plutil -p ~/Library/Developer/CoreSimulator/Devices/$DEVICEUUID/data/Library/Preferences/.GlobalPreferences.plist
xcrun simctl boot $DEVICEUUID
xcrun simctl bootstatus $DEVICEUUID
xcrun simctl install booted /Users/runner/work/appiumclonetest/appiumclonetest/BuildFiles/mobile.app
When I use iOS 15.0, the .GlobalPreferences.plist file does not exist in ~/Library/Developer/CoreSimulator/Devices/$DEVICEUUID/data/Library/Preferences. Where can I find it?
Can I change the simulator language by editing the .GlobalPreferences.plist file, or do I need to change something else to make it work? I also searched for similar discussions, but with no luck.
Thanks
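For what it's worth, a commonly used alternative to editing .GlobalPreferences.plist is to pass the language and locale to the app at launch time, either as simctl launch arguments or through the Appium XCUITest driver's language/locale capabilities. A rough sketch only; com.example.mobile is a placeholder for your app's bundle identifier:
# Hypothetical alternative: set the language per app launch instead of per device
xcrun simctl boot $DEVICEUUID
xcrun simctl install $DEVICEUUID /Users/runner/work/appiumclonetest/appiumclonetest/BuildFiles/mobile.app
xcrun simctl launch $DEVICEUUID com.example.mobile -AppleLanguages "(ja)" -AppleLocale "ja_JP"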
I am receiving an MJPEG stream with GStreamer, and I have not been able to display it in a simple HTML5 page.
The send command:
gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! jpegenc ! rtpjpegpay ! udpsink host=<IP> port=5600
The receive command:
gst-launch-1.0 udpsrc port=5600 ! application/x-rtp,encoding-name=JPEG,payload=26 ! rtpjpegdepay ! jpegdec ! autovideosink
The receive command works fine opening a new window that shows the stream as expected.
However, I was not able to find a way of displaying it in an HTML page.
Do you have any suggestions on what to look for, since I am new to the media streaming field?
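Not a full answer, but one direction to explore: browsers will not consume raw RTP/JPEG, so the stream usually has to be repackaged into something HTTP-friendly first. A sketch, assuming the hlssink element from gst-plugins-bad is available and that /var/www/html/live is a directory served by your web server: re-encode to H.264, write HLS segments there, and point an HTML5 <video> element (with hls.js where the browser needs it) at the playlist.
gst-launch-1.0 udpsrc port=5600 ! application/x-rtp,encoding-name=JPEG,payload=26 \
  ! rtpjpegdepay ! jpegdec ! videoconvert \
  ! x264enc tune=zerolatency bitrate=1000 key-int-max=30 \
  ! h264parse ! mpegtsmux \
  ! hlssink location=/var/www/html/live/segment%05d.ts playlist-location=/var/www/html/live/playlist.m3u8 target-duration=2 max-files=5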
I am using the QEMU emulator for AArch64 and want to create an external checkpoint (or fast-forward) that saves everything I need to restart the system from the point at which the checkpoint was created. (In fact, I want to skip the boot step.) I have only found material on QEMU VM snapshots and fast-forwarding, but it does not work for the emulator. Is there any checkpoint function for the QEMU emulator?
A savevm snapshot should do what you want. The short answer is that you need to set up a QCOW2 disk for the snapshots to be saved to, and then in the monitor you can use the 'savevm' command to take the snapshot. Then the command line '-loadvm' option will let you resume from there. This all works fine in emulation of AArch64.
https://translatedcode.wordpress.com/2015/07/06/tricks-for-debugging-qemu-savevm-snapshots/ has a more detailed tutorial.
Minimal example
Peter's answer just worked for me, but let me provide a fully reproducible example.
I have fully automated everything at: https://github.com/cirosantilli/linux-kernel-module-cheat/tree/1e0f0b492855219351b0bfa2eec4d3a6811fcaaa#snapshot
The key step is to convert the image to qcow2 as explained at: https://docs.openstack.org/image-guide/convert-images.html
cd buildroot/output.x86_64~/images
qemu-img convert -f raw -O qcow2 rootfs.ext2 rootfs.ext2.qcow2
And the final QEMU command used was:
./buildroot/output.x86_64~/host/usr/bin/qemu-system-x86_64 -m 128M -monitor telnet::45454,server,nowait -netdev user,hostfwd=tcp::45455-:45455,id=net0 -smp 1 -M pc -append 'root=/dev/vda nopat nokaslr norandmaps printk.devkmsg=on printk.time=y console=ttyS0' -device edu -device virtio-net-pci,netdev=net0 -drive file=./buildroot/output.x86_64~/images/rootfs.ext2.qcow2,if=virtio,format=qcow2 -kernel ./buildroot/output.x86_64~/images/bzImage -nographic
To test it out, login into the VM, and run:
i=0; while true; do echo $i; i=$(($i + 1)); sleep 1; done
Then on another shell, open the monitor:
telnet localhost 45454
savevm my_snap_id
The counting continues. Then, if we load the VM:
loadvm my_snap_id
the counting goes back to where we saved. This shows that CPU and memory states were reverted.
We can also verify that the disk state is reverted. Guest:
echo 0 >f
Monitor:
savevm my_snap_id
Guest:
echo 1 >f
Monitor:
loadvm my_snap_id
Guest:
cat f
And the output is 0.
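As Peter mentions, the command-line '-loadvm' option lets you resume from the snapshot at startup instead of through the monitor; the snapshot itself is stored inside the qcow2 image. A small sketch using the names from above:
# List the snapshots stored inside the qcow2 image
qemu-img snapshot -l ./buildroot/output.x86_64~/images/rootfs.ext2.qcow2
# Resume from the saved state at boot: re-run the full qemu-system-x86_64 command
# shown above with "-loadvm my_snap_id" appended.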
I'm trying to make a live stream from a Raspberry Pi camera available on an HTML5 web page. Because of a combination of factors, I would like to stream it to an outside server PC (the server PC's OS is Windows 7), and this server should be able to supply the stream to the HTML page.
I'm able to get the stream from the Raspberry Pi and stream it with Gstreamer to an external server like this:
Raspberry Pi:
raspivid -n -t 0 -rot 270 -w 960 -h 720 -fps 30 -b 2000000 -o - | gst-launch-1.0 -e -vvvv fdsrc ! h264parse ! rtph264pay pt=96 config-interval=1 ! udpsink host=<EXTERNAL_IP> port=5000
External server
gst-launch-1.0 -e -v udpsrc port=5000 ! application/x-rtp, payload=96 ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! fpsdisplaysink sync=false text-overlay=false
As a result, I can display the live video stream through GStreamer (the GStreamer D3D video sink) on the external server PC.
Now I have a problem:
I want to display this as HTML5 video served by Apache on the server side (PC) instead of the GStreamer D3D video output.
I searched for this solution for a long time but I couldn't find anything.
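One possible direction, a sketch rather than a tested setup: since the incoming stream is already H.264, the receiving pipeline on the server could segment it as HLS into Apache's document root instead of rendering it with the D3D sink, and the HTML page would then point a <video> element (plus hls.js where the browser needs it) at the playlist. The hlssink element comes from gst-plugins-bad, and the Windows paths below are placeholders:
gst-launch-1.0 -e -v udpsrc port=5000 ! application/x-rtp,payload=96 ! rtpjitterbuffer \
  ! rtph264depay ! h264parse ! mpegtsmux \
  ! hlssink location=C:/Apache24/htdocs/live/segment%05d.ts playlist-location=C:/Apache24/htdocs/live/playlist.m3u8 target-duration=2 max-files=5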
I'm trying to get the RTP stream from a DM365 board.
With VLC there is no problem; the stream can be opened with an SDP file.
It is a camera view encoded with the TI-specific H264 encoder (TIVidenc1 codecName=h264enc), plus sound.
I'm developing an application and I want to use GStreamer.
I built a GStreamer pipeline to later embed the video in my app, but I can't open the stream with this pipeline.
On Ubuntu, the client pipeline is:
gst-launch -v gstrtpbin name=rtpbin latency=200 \
udpsrc caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264" port=5000 ! rtpbin.recv_rtp_sink_0 \
rtpbin. ! rtph264depay ! decodebin ! xvimagesink \
udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
rtpbin.send_rtcp_src_0 ! udpsink port=5005 host=192.168.231.14 sync=false async=false \
udpsrc caps="application/x-rtp,media=(string)audio,clock-rate=(int)8000,encoding-name=(string)PCMA" port=5002 ! rtpbin.recv_rtp_sink_1 \
rtpbin. ! rtppcmadepay ! decodebin ! audioconvert ! audioresample ! alsasink \
udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 \
rtpbin.send_rtcp_src_1 ! udpsink port=5007 host=192.168.231.14 sync=false async=false
The sender is the DM365; its pipeline is as follows:
SENDER
gst-launch-0.10 gstrtpbin name=rtpbin \
v4l2src always-copy=FALSE input-src=composite ! queue ! \
TIVidResize contiguousInputFrame=FALSE ! 'video/x-raw-yuv,width=608,height=384,format=(fourcc)NV12,bitRate=48100' ! \
TIVidenc1 codecName=h264enc engineName=encode contiguousInputFrame=TRUE ! rtph264pay ! queue ! \
rtpbin.send_rtp_sink_0 rtpbin.send_rtp_src_0 ! udpsink port=5000 host=192.168.231.255 ts-offset=0 name=vrtpsink \
rtpbin.send_rtcp_src_0 ! udpsink port=5001 host=192.168.231.255 sync=false async=false name=vrtcpsink \
udpsrc port=5005 name=vrtpsrc ! rtpbin.recv_rtcp_sink_0 \
alsasrc ! queue ! alawenc ! rtppcmapay ! queue ! \
rtpbin.send_rtp_sink_1 rtpbin.send_rtp_src_1 ! udpsink port=5002 host=192.168.231.255 ts-offset=0 name=artpsink \
rtpbin.send_rtcp_src_1 ! udpsink port=5003 host=192.168.231.255 sync=false async=false name=artcpsink \
udpsrc port=5007 name=artpsrc ! rtpbin.recv_rtcp_sink_1
I solved it.
You need to pass the caps information from the sender side to the udpsrc on the client side.
When you run the sender pipeline with -v, the caps for the sender's UDP element are printed on the terminal.
Just add that string to your udpsrc caps="..." and it works.
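As an illustration only (the exact values depend on your encoder and must be copied from the sender's -v output), the caps string looks something like this and goes straight into the client's udpsrc:
udpsrc port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtpbin.recv_rtp_sink_0
In practice the printed caps also include fields such as sprop-parameter-sets, which should be pasted along with the rest.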