openRTSP file-per-frame timing information - h.264

I have 3 IP cameras whose H.264 video feeds I am recording over RTSP/RTP. I am using openRTSP, on Ubuntu 20.04, with the -m option, which writes each frame to its own file. The filename includes a UNIX timestamp, which should let me align the three cameras within a small margin of error; however, I am struggling to do this at the moment.
Is the UNIX timestamp in the filename derived from the arrival time of each frame over the network, or is the timing obtained (somehow) directly from the camera?
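For reference, a minimal sketch of the kind of invocation described above (the URL, prefix, and duration are placeholders; check the option behavior against your openRTSP build):

# write each incoming H.264 frame to its own file, prefixed per camera
openRTSP -m -F cam1 -d 60 "rtsp://<camera_ip>:554/stream1"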

Related

FFMPEG HLS streaming and transcoding on the fly to HTML player - video duration changes while transcoding

I am trying to build a video streaming server and watch the videos directly in a web browser. The idea is for the server to pull video from a remote server, transcode it locally with a different audio format, and then instantly stream it to the client (I need it to work in this specific way).
This is the ffmpeg command I am currently using:
ffmpeg -i "url" -c:v copy -c:a aac -ac 2 -f hls -hls_time 60 -hls_playlist_type event -hls_flags independent_segments out.m3u8
The HLS stream is attached to the HTML player with hls.js, and it works. However, the video duration keeps changing while the video is being transcoded. I have tried to change the video duration with JS, like $('video').duration = 120;, with no luck.
How do I make the player display the full duration of the video file instead of the duration transcoded so far?
I am also planning to implement seeking, but I am clueless there. The current idea is to send the seek time to the server, terminate ffmpeg, and restart it from that position. However, I think the player might get stuck loading and not start playing again without a reload.
ffmpeg can't write segments to the manifest before they are on disk, so if you don't want the "live-like" behavior during media preparation, you will need to wait for ffmpeg to finish.
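If the stream does not need to be watchable while it is being prepared, one hedged workaround is to transcode to a VOD playlist first and only publish out.m3u8 once ffmpeg exits; the finished playlist then carries EXT-X-ENDLIST, so players report a fixed duration ("url" is a placeholder, as above):

ffmpeg -i "url" -c:v copy -c:a aac -ac 2 -f hls -hls_time 60 -hls_playlist_type vod -hls_flags independent_segments out.m3u8

For the seeking plan, restarting ffmpeg with -ss <seconds> placed before -i is the usual way to begin transcoding from an offset.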

Measure application stats with Chrome DevTools?

I am using the DevTools (mainly the Network tab) provided by Chrome. I know it provides a lot of stats, but I need to measure the ones below:
1. Time taken by the web server to process the request?
I know Chrome provides Waiting (TTFB), but that is Time To First Byte, which combines the time the server took to prepare the response with the time the first byte took to travel from the server to the browser.
2. Time taken by the data to transfer from server to browser?
I see the Content Download stat. Is that the network travel time? If yes, is it
`Time taken by webserver to process the web request = TTFB - Content Download`
3. Time taken by the browser to render the data?
Also, the Network tab's summary bar shows three stats: Finish, DOMContentLoaded, and Load.
My understanding is:
Finish: total time taken by the request from start to finish
DOMContentLoaded: time until the DOMContentLoaded event fires (i.e. the HTML has been parsed and the JS/CSS files have been downloaded)
Load: time until the onload event fires
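As a rough cross-check outside the browser, curl can report the same phases; time_starttransfer corresponds approximately to Waiting (TTFB), and the remainder up to time_total to Content Download (the URL is a placeholder):

# first value ~ TTFB; total minus first value ~ content download time
curl -s -o /dev/null -w 'TTFB: %{time_starttransfer}s  total: %{time_total}s\n' https://example.com/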

Check end of transcoding via the Vimeo API?

I need to be sure, via an API request, that the transcoding of an uploaded video has fully ended and all possible renditions are available.
I know that the API reports, in [body][status], one of
available, uploading, transcoding, uploading_error, transcoding_error
but the problem is that the status changes from 'transcoding' to 'available' the second the first rendition is done. So how can I check via an API request that Vimeo's work has fully ended and no further renditions will be added to the video in the next minutes?
Thanks
If you are a PRO user and there is a specific size you are looking for, you should wait for that file to appear in the files list. Waiting for "all" is not recommended, because we are constantly changing and improving the list of files available.
If you are not a PRO user, this information is not exposed at all. Once the SD version is available you will be able to generate embed codes, and shortly after that your player will have the ability to switch into HD mode (once the HD file is available).
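A hedged sketch of such a status poll against the v3 API (the video id and access token are placeholders; which fields are exposed depends on your plan):

# status is the overall state; files (PRO only) lists the renditions so far
curl -H "Authorization: bearer <access_token>" "https://api.vimeo.com/videos/<video_id>?fields=status,files"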

Real-time streaming to HTML5 (without WebRTC) using just the video tag

I would like to wrap real-time encoded data into WebM or Ogg and send it to an HTML5 browser.
Can WebM or Ogg do this?
MP4 cannot, due to its MDAT atoms (one cannot wrap H.264 and MP3 in real time, package it, and send it to the client).
Say I am feeding the video from my webcam and the audio from my built-in mic.
(Fragmented MP4 can handle this, but it's a hassle to find libraries that do it.)
I need to do this because I do not want to send audio and video separately.
If I did send them separately, audio over the audio tag and video over the video tag (audio and video demuxed and sent apart), could I sync them in the client browser with JavaScript? I saw some examples, but I am not sure yet.
I did this with ffmpeg/ffserver running on Ubuntu as follows for WebM (MP4 and Ogg are a bit easier and should work in a similar manner from the same server, but you should use all three formats for compatibility across browsers).
First, build ffmpeg from source to include the libvpx drivers (even if you're using a version that has them, you need the newest ones (as of this month) to stream WebM, because they only just added the ability to include global headers). I did this on an Ubuntu server and desktop, and this guide showed me how; instructions for other OSes can be found here.
Once you've got the appropriate version of ffmpeg/ffserver, you can set them up for streaming; in my case this was done as follows.
On the video capture device:
ffmpeg -f video4linux2 -standard ntsc -i /dev/video0 http://<server_ip>:8090/0.ffm
The "-f video4linux2 -standard ntsc -i /dev/video0" portion of that may change depending on your input source (mine is for a video capture card).
Relevant ffserver.conf excerpt:
Port 8090
#BindAddress <server_ip>
MaxHTTPConnections 2000
MaxClients 100
MaxBandwidth 1000000
CustomLog /var/log/ffserver
NoDaemon
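# feed written by the capture machine (feeder_ip)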
<Feed 0.ffm>
File /tmp/0.ffm
FileMaxSize 5M
ACL allow <feeder_ip>
</Feed>
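# feed written by the local MPEG -> WebM transcoder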
<Feed 0_webm.ffm>
File /tmp/0_webm.ffm
FileMaxSize 5M
ACL allow localhost
</Feed>
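# intermediate MPEG-1 stream that the transcoder pulls from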
<Stream 0.mpg>
Feed 0.ffm
Format mpeg1video
NoAudio
VideoFrameRate 25
VideoBitRate 256
VideoSize cif
VideoBufferSize 40
VideoGopSize 12
</Stream>
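# WebM stream served to browsers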
<Stream 0.webm>
Feed 0_webm.ffm
Format webm
NoAudio
VideoCodec libvpx
VideoSize 320x240
VideoFrameRate 24
AVOptionVideo flags +global_header
AVOptionVideo cpu-used 0
AVOptionVideo qmin 1
AVOptionVideo qmax 31
AVOptionVideo quality good
PreRoll 0
StartSendOnKey
VideoBitRate 500K
</Stream>
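# built-in server status page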
<Stream index.html>
Format status
ACL allow <client_low_ip> <client_high_ip>
</Stream>
Note this is configured so that the machine at feeder_ip executes the aforementioned ffmpeg command, and the server at server_ip serves clients from client_low_ip through client_high_ip while also handling the MPEG-to-WebM conversion itself (continued below).
This ffmpeg command is executed on the machine referred to above as server_ip (it handles the actual MPEG -> WebM conversion and feeds it back into ffserver on a different feed):
ffmpeg -i http://<server_ip>:8090/0.mpg -vcodec libvpx http://localhost:8090/0_webm.ffm
Once these have all been started up (first ffserver, then the feeder_ip ffmpeg process, then the server_ip ffmpeg process) you should be able to access the live stream at http://<server_ip>:8090/0.webm and check the status at http://<server_ip>:8090/
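Concretely, the startup sequence with the hosts from above looks like this (the ffserver.conf path is hypothetical):

# 1. on server_ip:
ffserver -f /etc/ffserver.conf
# 2. on feeder_ip:
ffmpeg -f video4linux2 -standard ntsc -i /dev/video0 http://<server_ip>:8090/0.ffm
# 3. on server_ip:
ffmpeg -i http://<server_ip>:8090/0.mpg -vcodec libvpx http://localhost:8090/0_webm.ffm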
Hope this helps.
Evren,
Since you asked this question initially, the Media Source Extensions (https://www.w3.org/TR/media-source/) have matured enough to be able to play very short (30 ms) ISO-BMFF video/mp4 segments with just a little buffering.
Refer to
HTML5 live streaming
So your statement
(one can not wrap h264 and mp3 in real time and wrap it and send it to the client)
is out of date now. Yes, you can do it with H.264 + AAC.
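As one hedged illustration, ffmpeg can emit MSE-compatible fragmented MP4 in real time from a webcam and mic (the device names are placeholders; the -movflags combination writes self-contained fragments instead of requiring a complete moov/mdat pair up front):

ffmpeg -f v4l2 -i /dev/video0 -f alsa -i default -c:v libx264 -c:a aac -movflags +frag_keyframe+empty_moov+default_base_moof -f mp4 pipe:1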
There are several implementations out there; take a look at Unreal Media Server.
From Unreal Media Server FAQ: http://umediaserver.net/umediaserver/faq.html
How is Unreal HTML5 live streaming different from MPEG-DASH?
Unlike MPEG-DASH, Unreal Media Server uses a WebSocket protocol for live streaming to HTML5 MSE element in web browsers. This is much more efficient than fetching segments via HTTP requests per MPEG-DASH. Also, Unreal Media Server sends segments of minimal duration, as low as 30 ms. That allows for low, sub-second latency streaming, while MPEG-DASH, like other HTTP chunk-based live streaming protocols, cannot provide low latency live streaming.
Their demos webpage has a live HTML5 feed from RTSP camera:
http://umediaserver.net/umediaserver/demos.html
Notice that the latency in HTML5 player is comparable to that in Flash player.
I'm not 100% sure you can do this. HTML5 has not ratified any live-streaming mechanism. You could use WebSockets and send data in real time to the browser, but then you have to write the parsing logic yourself, and I do not know how you would feed the data to the player as it arrives.
As for the video and audio tags: the video tag can play container files that hold both audio and video, so wrap your content in a compatible container. If you can modify your setup to keep appending the incoming live content to that video file while streaming out every byte the browser requests, this could be done, but it is definitely non-trivial.

Record Video from Browser using Webcam and Microphone Inputs

I need to record a video through the user's browser, using input from the camera and microphone, and send it to my server. Since HTML5 still doesn't make that magic happen, I'm looking for Flash solutions.
Do I really need a Flash media server to do that, or can I do a POST request?
I want to capture both inputs (webcam and microphone), put them in a .flv, and send it to my server.
I've seen some implementations that use byte arrays to record and send audio and video separately. The problem is that this generates a series of synchronization problems when you try to compose them into a single file.
If you're still looking for a solution, check out:
http://framebase.io
They have an embeddable recording widget that can transcode the videos automatically. I'd check out the docs, but on success you can run an API call to check the status of the transcoding and then download the result to your server, or you can just use your own S3 bucket.