Display received GStreamer MJPEG on an HTML page

I am receiving an MJPEG stream with GStreamer and I have not been able to display it in a simple HTML5 page.
The send command:
gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! jpegenc ! rtpjpegpay ! udpsink host=<IP> port=5600
The receive command:
gst-launch-1.0 udpsrc port=5600 ! application/x-rtp,encoding-name=JPEG,payload=26 ! rtpjpegdepay ! jpegdec ! autovideosink
The receive command works fine, opening a new window that shows the stream as expected.
However, I have not been able to find a way of displaying it in an HTML page.
Do you have any suggestions on what to look for, since I am new to the media streaming field?
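One direction that might be worth looking into (only a sketch under assumptions, not a tested setup; /var/www/html/hls is a placeholder for whatever directory your web server serves): browsers will not play raw RTP/MJPEG, but they do play HLS, so the received frames could be transcoded to H.264, segmented with hlssink, and served as a playlist that an HTML5 video element can play (with hls.js on browsers without native HLS support):
gst-launch-1.0 udpsrc port=5600 ! application/x-rtp,encoding-name=JPEG,payload=26 ! rtpjpegdepay ! jpegdec ! videoconvert ! x264enc tune=zerolatency ! h264parse ! mpegtsmux ! hlssink playlist-location=/var/www/html/hls/playlist.m3u8 location=/var/www/html/hls/segment%05d.ts target-duration=2 max-files=5
The page then only needs a video element pointing at the playlist URL; expect a few seconds of latency from the segmentation.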

Related

Newb Shell - How to get response in a variable

Just a quick question to solve an issue I've been facing for days now: how do I get a wget JSON response into a shell variable?
So far I have a wget command like this:
wget "http://IP:PORT/webapi/auth.cgi?account=USER&passwd=PASSWD"
The server response is normally something like:
{"data":{"sid":"9O4leaoASc0wgB3J4N01003"},"success":true}
What I'd like to do is extract the sid value into a variable (as it is used as a login ticket), and also the success value in order to make sure that the command has executed correctly...
I think it is a very easy command to build, but I've never worked with wget/HTTP responses in shell commands...
Thanks a lot for your help!
EDIT: Thanks for your help. I gave both answers a try, but I am getting the same error message (whatever I do):
--2022-07-16 14:21:38-- http://xxxxxxxx:port/webapi/auth.cgi?api=SYNO.API.Auth&method=Login&version=3&account=USER&passwd=PWD&session=SurveillanceStation&format=sid
Connecting to 192.168.1.100:5000... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/plain]
auth.cgi?api=SYNO.API.Auth&method=Login&version=3&account=USER&passwd=PASSWD&session=SurveillanceStation&format=sid: Permission denied
Cannot write to `auth.cgi?api=SYNO.API.Auth&method=Login&version=3&account=USER&passwd=PASSWD&session=SurveillanceStation&format=sid' (Permission denied).
The annoying thing: executing the URL from a web browser works just fine... :/
You can first store the result of the wget command in a variable (using -O- so that wget writes the response to stdout instead of a file) and then use it:
VAR=$(wget -qO- "http://IP:PORT/webapi/auth.cgi?account=USER&passwd=PASSWD")
and then use jq to extract the values from the JSON:
sid=$(echo "$VAR" | jq .data.sid)
success=$(echo "$VAR" | jq .success)
If you have problems with the execution of wget, you can try something like:
wget -O output_file 'http://xxxxxxxx:port/webapi/auth.cgi?api=SYNO.API.Auth&method=Login&version=3&account=USER&passwd=PWD&session=SurveillanceStation&format=sid'
and then set variables:
sid=$(jq .data.sid output_file)
success=$(jq .success output_file)
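As a side note, jq prints string values with their surrounding quotes, while -r outputs the raw string, which is usually what you want for the sid. A quick sketch of how the two variables might then be used (same output_file as above):
sid=$(jq -r .data.sid output_file)
success=$(jq .success output_file)
# only use the session id if the API reported success
if [ "$success" = "true" ]; then
    echo "Authenticated, sid=$sid"
else
    echo "Login failed" >&2
fi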
I do not know why I am facing this Permission Denied error. So I tried saving the cookie in a dedicated folder... and it works just fine :)
The final command looks like:
VAR=$(wget -q --keep-session-cookies --save-cookies "/var/tmp/cookie_tmp" -O- "http://IP:PORT/webapi/auth.cgi?api=SYNO.API.Auth&method=login&version=1&account=USER&passwd=PWD&session=SurveillanceStation");
Thanks for your help (I learned a lot about sed ;) )
So this can be done using the stream editor, "sed". There is a lot to learn, but for this post here is an idea of the code:
sid=$(wget -qO- '<your url>' | sed 's/.*sid":"\(.*\)"},.*/\1/')
success=$(wget -qO- '<your url>' | sed 's/.*success":\(.*\)}/\1/')
This will create 2 variables $sid and $success.
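Applied to the sample response from the question, the two expressions give exactly the values you are after:
echo '{"data":{"sid":"9O4leaoASc0wgB3J4N01003"},"success":true}' | sed 's/.*sid":"\(.*\)"},.*/\1/'
# prints: 9O4leaoASc0wgB3J4N01003
echo '{"data":{"sid":"9O4leaoASc0wgB3J4N01003"},"success":true}' | sed 's/.*success":\(.*\)}/\1/'
# prints: true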
You can learn more about sed in depth here.
Hope this helped!

Realtime streaming video to HTML5 with RaspberryPi, gstreamer

I'm trying to make a live stream from a Raspberry Pi camera available on an HTML5 webpage. Because of a combination of factors, I would like to stream it to an outside server PC (the server PC's OS is Windows 7), and this server should supply the stream to the HTML page.
I'm able to get the stream from the Raspberry Pi and stream it with GStreamer to an external server like this:
Raspberry Pi:
raspivid -n -t 0 -rot 270 -w 960 -h 720 -fps 30 -b 2000000 -o - | gst-launch-1.0 -e -vvvv fdsrc ! h264parse ! rtph264pay pt=96 config-interval=1 ! udpsink host=<External IP> port=5000
External server:
gst-launch-1.0 -e -v udpsrc port=5000 ! application/x-rtp, payload=96 ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! fpsdisplaysink sync=false text-overlay=false
As a result, I could display the live video stream through GStreamer (the GStreamer D3D video sink) on the external server PC.
Now I have a problem:
I want to display this as HTML5 video served by Apache on the server side (PC) instead of the GStreamer D3D video output.
I have searched for a solution for a long time, but I couldn't find anything.
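One possible direction, given only as a sketch under assumptions (C:/Apache24/htdocs is a placeholder for your Apache document root, and none of this has been verified against your setup): since the incoming stream is already H.264, the receiving pipeline could remux it into HLS segments with hlssink instead of rendering it, and the HTML page then plays the playlist through an HTML5 video element (hls.js where the browser lacks native HLS support):
gst-launch-1.0 -e -v udpsrc port=5000 ! application/x-rtp, payload=96 ! rtpjitterbuffer ! rtph264depay ! h264parse ! mpegtsmux ! hlssink playlist-location=C:/Apache24/htdocs/live.m3u8 location=C:/Apache24/htdocs/segment%05d.ts target-duration=4 max-files=6
Because the H.264 is only repackaged, no re-encoding is needed on the server, but HLS segmentation does add a few seconds of latency.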

GStreamer + V4L2loopback as Chrome compatible webcam

I am trying to create a virtual camera in Chrome using v4l2loopback where the incoming video is H264 via RTP.
I have had some success in getting a GStreamer test video recognized in Chrome with MediaStreamTrack.getSources:
$ sudo modprobe v4l2loopback
$ gst-launch-1.0 videotestsrc ! v4l2sink device=/dev/video0
This works well; Chrome will display the video test source.
However, when I use an incoming H.264/RTP source, the device does not show up in MediaStreamTrack.getSources. For example:
gst-launch-1.0 -v tcpclientsrc host=<IPADDRESS> port=5000 ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! v4l2sink device=/dev/video0
What is the reason for this? What would the solution be?
I had thought perhaps this has to do with the video formats and that the correct caps need to be set through v4l2loopback.
This looks like a bug in gstreamer or v4l2loopback. It is somehow related to how variable frame rate is handled.
I managed to reproduce it in this way:
Start pipeline transmitting video from network to /dev/video0
$ gst-launch-1.0 -v tcpserversrc port=5000 \
! gdpdepay ! rtph264depay \
! decodebin \
! v4l2sink device=/dev/video0
Start pipeline transmitting some video to port 5000
$ gst-launch-1.0 -v videotestsrc \
! x264enc ! rtph264pay ! gdppay \
! tcpserversink port=5000
Try to get video from /dev/video0
$ gst-launch-1.0 v4l2src device=/dev/video0 ! autovideosink
...
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Device '/dev/video0' is not a capture device.
Now, note the caps for v4l2sink in the debug log of the first pipeline.
/GstPipeline:pipeline0/GstV4l2Sink:v4l2sink0.GstPad:sink: caps = video/x-raw, format=(string)I420, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, colorimetry=(string)bt601, framerate=(fraction)0/1
It mentions framerate=(fraction)0/1. In GStreamer's terms this means that the frame rate is variable. According to v4l2sink's source code, it seems that it feeds this same frame rate to the v4l2loopback kernel module, but v4l2loopback does not understand a zero frame rate.
(This is only a hypothesis; I still need to check whether this is what really happens.)
To work around this bug you can fix the frame rate. Just add a videorate element to the first pipeline:
$ gst-launch-1.0 -v tcpserversrc port=5000 \
! gdpdepay ! rtph264depay \
! decodebin \
! videorate ! video/x-raw, framerate=25/1 \
! v4l2sink device=/dev/video0
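With the frame rate pinned, the loopback device should negotiate like a normal capture device again. Assuming the v4l-utils package is installed, the format the loopback now advertises can be double-checked with:
$ v4l2-ctl --device=/dev/video0 --get-fmt-video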

I get rtp stream via VLC but not with gstreamer pipeline

I'm trying to get the RTP stream from a DM365 board.
With VLC there is no problem; the stream can be opened with an SDP file.
It is a camera view encoded with the TI-specific H.264 encoder (TIVidenc1 codecName=h264enc), plus sound.
I'm developing an application and I want to use GStreamer.
I built a GStreamer pipeline so I can later embed the video in my app, but I can't open the stream with this pipeline.
On Ubuntu, the client pipeline:
gst-launch -v gstrtpbin name=rtpbin latency=200 \
udpsrc caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264" port=5000 ! rtpbin.recv_rtp_sink_0 \
rtpbin. ! rtph264depay ! decodebin ! xvimagesink \
udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
rtpbin.send_rtcp_src_0 ! udpsink port=5005 host=192.168.231.14 sync=false async=false \
udpsrc caps="application/x-rtp,media=(string)audio,clock-rate=(int)8000,encoding-name=(string)PCMA" port=5002 ! rtpbin.recv_rtp_sink_1 \
rtpbin. ! rtppcmadepay ! decodebin ! audioconvert ! audioresample ! alsasink \
udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 \
rtpbin.send_rtcp_src_1 ! udpsink port=5007 host=192.168.231.14 sync=false async=false
The sender is the DM365; its pipeline is as follows:
gst-launch-0.10 gstrtpbin name=rtpbin \
v4l2src always-copy=FALSE input-src=composite ! queue ! \
TIVidResize contiguousInputFrame=FALSE ! 'video/x-raw-yuv,width=608,height=384,format=(fourcc)NV12,bitRate=48100' ! \
TIVidenc1 codecName=h264enc engineName=encode contiguousInputFrame=TRUE ! rtph264pay ! queue ! rtpbin.send_rtp_sink_0 \
rtpbin.send_rtp_src_0 ! udpsink port=5000 host=192.168.231.255 ts-offset=0 name=vrtpsink \
rtpbin.send_rtcp_src_0 ! udpsink port=5001 host=192.168.231.255 sync=false async=false name=vrtcpsink \
udpsrc port=5005 name=vrtpsrc ! rtpbin.recv_rtcp_sink_0 \
alsasrc ! queue ! alawenc ! rtppcmapay ! queue ! rtpbin.send_rtp_sink_1 \
rtpbin.send_rtp_src_1 ! udpsink port=5002 host=192.168.231.255 ts-offset=0 name=artpsink \
rtpbin.send_rtcp_src_1 ! udpsink port=5003 host=192.168.231.255 sync=false async=false name=artcpsink \
udpsrc port=5007 name=artpsrc ! rtpbin.recv_rtcp_sink_1
I solved it.
You need to pass the caps information from the sender side to the udpsrc on the client side.
When you run the sender pipeline with -v, the caps of the sender's UDP elements are printed on the terminal.
Just add this string to your client's udpsrc caps="..." and it works.
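For illustration, the video udpsrc of the client pipeline above would then look something like this, where the caps value (in particular sprop-parameter-sets, deliberately elided here) is copied verbatim from the sender's terminal output:
udpsrc port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, sprop-parameter-sets=(string)\"...\", payload=(int)96" ! rtpbin.recv_rtp_sink_0 \
The rest of the client pipeline stays unchanged.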

mailto: open google chrome in Ubuntu with notify msg

I'm writing a small script to open mailto links from webpages in a small Google Chrome app window:
so far I have this:
#!/bin/sh
notify-send "Opening Gmail" "`echo $1`" -i /usr/local/share/icons/hicolor/48x48/apps/google-chrome.png -t 5000
google-chrome -app="https://mail.google.com/mail/?extsrc=mailto&url=`echo $1`"
which works nicely; however, I'd like to add the email recipient to the notification, something like this, but I need a regex to get the email address out of the mailto link, which might contain subjects and such:
#!/bin/sh
$str = preg_replace('#<a.+?href="mailto:(.*?)".+?</a>#', "$1", $str);
notify-send "Opening Gmail" "`echo $str`" -i /usr/local/share/icons/hicolor/48x48/apps/google-chrome.png -t 5000
google-chrome -app="https://mail.google.com/mail/?extsrc=mailto&url=`echo $1`"
This does not work...
Any ideas?
UPDATE: here's the working code:
#!/bin/sh
str=$(echo $1|sed 's/.*mailto:\([^?]*\)?.*/\1/')
notify-send "Opening Gmail" "to: `echo $str`" -i /usr/local/share/icons/hicolor/48x48/apps/google-chrome.png -t 5000
google-chrome -app="https://mail.google.com/mail/?extsrc=mailto&url=`echo $1`"
If you write it like this, it's not shell :)
Can you provide a sample string to run the regex on? Basically it will be a sed invocation that cuts away everything but the address. A mail address can, according to the RFC, be quite complicated, so the simple approach will work in most cases, but not every time.
Try to start from something like
sed 's/.*mailto:\([^?]*\)?.*/\1/'
So you might want to use it like this:
str=$(echo $1|sed 's/.*mailto:\([^?]*\)?.*/\1/')
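For example, fed a typical mailto URL (the address here is made up), the expression keeps only the part before the first question mark; note that a link without any ? will pass through unchanged:
echo 'mailto:someone@example.com?subject=Hello' | sed 's/.*mailto:\([^?]*\)?.*/\1/'
# prints: someone@example.com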
Great! I took your script and made some changes so it works better, look:
#!/bin/sh
str=$(echo $1|sed 's/.*mailto:\([^?]*\)?.*/\1/')
notify-send "Abrindo Gmail" "to: `echo $str`" -i /usr/local/share/icons/hicolor/48x48/apps/google-chrome.png -t 5000
chromium-browser "https://mail.google.com/mail/?view=cm&fs=1&tf=1&source=mailto&to=$1"