Issues with RTSP video in browser using Streamedian

I have a system where I need to view four RTSP feeds in a web browser alongside other web content. The feeds come from "Amcrest ProHD 1080P PTZ Camera Outdoor, 2MP Outdoor Vandal Dome IP PoE Camera" units. All of the connectivity is wired within the same intranet. I'm viewing the cameras on Insignia Amazon Fire TVs, and I'm not getting the results I expected.
To get it working, I installed Streamedian on my Windows machine and got it configured. I didn't change much about the proxy server's default configuration except the port number it uses to serve the feeds. On the browser side, I included the necessary JavaScript files (again, only changing the port numbers) and made my call to load the players. Currently, that code looks like this:
var playerOptions = {
    socket: 'ws://192.168.1.253:8080/ws/',
    redirectNativeMediaErrors: true,
    bufferDuration: 5,
    continuousFileLength: 5000,
    eventFileLength: 5000,
    errorHandler: function (err) {
        console.log(err.message);
    }
};
setTimeout(function () {
    player1 = Streamedian.player("p1", playerOptions);
    player2 = Streamedian.player("p2", playerOptions);
    player3 = Streamedian.player("p3", playerOptions);
    player4 = Streamedian.player("p4", playerOptions);
}, 100);
Each of the players looks like this in the HTML:
<video id="p1" controls autoplay>
    <source src="rtsp://path.to.camera" type="application/x-rtsp">
</video>
The players do load, and appear to be attempting to start the stream. However, they quickly start buffering and never come out of it to start showing anything resembling a live feed. Any ideas on what I can do to try to get this running? Am I even heading down a valid path to get where I need to be?
I'm not worried about rewind or anything in the past. This is purely for live streaming and only needs to show "now"; saving any history is unnecessary.
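Since only the live edge matters here, one cheap experiment is to shrink bufferDuration in the options above (assuming Streamedian interprets it as seconds of buffered video) and to bring up a single player first, to rule out the proxy struggling with four simultaneous feeds:

```javascript
// Hypothetical low-latency variant of the playerOptions shown above.
// Assumption: bufferDuration is in seconds; a smaller value keeps less
// video queued, trading smoothness for lower live-feed delay.
var lowLatencyOptions = {
    socket: 'ws://192.168.1.253:8080/ws/',
    redirectNativeMediaErrors: true,
    bufferDuration: 1,          // was 5
    continuousFileLength: 5000,
    eventFileLength: 5000,
    errorHandler: function (err) {
        console.log(err.message);
    }
};

// Start one player alone before adding the other three.
if (typeof Streamedian !== 'undefined') {
    var player1 = Streamedian.player('p1', lowLatencyOptions);
}
```

If a single player with a small buffer still stalls, the bottleneck is more likely the proxy or network than the TV browsers.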
I've tried dropping the frame rate to 4fps. It didn't seem to help.
The Windows machine has average power at best. I've opened Task Manager while the streams are running, but nothing appears to be pegged out. Am I just running an under-powered machine for what I'm asking it to do?
Are the browsers on the TVs, or the TVs themselves, simply not capable of processing that much information? I've tried viewing the pages on my laptop as well (separate from the Windows server, and plenty powerful), but the results do not improve.
Should I be trying something else on the server side? I came across iSpyConnect this evening while searching for other (preferably free) options, but I can't tell whether it will give me what I need. I've also started looking at digital-signage applications for the TVs, but they all have a cost associated with them that I'd like to avoid if possible.
If anyone has any thoughts, ideas, comments, suggestions or questions about what I have in place, please let me know so that I can try to get something up and running within the facility.
Thanks!

Related

How to circumvent missing .stop() implementation of RTCRtpTransceiver in Chrome / Chromium

I am currently trying to build a WebRTC application that uses the RTCRtpTransceiver objects of the WebRTC standard. I add video and audio tracks to the connection and then, some time later, try to remove them.
I do this with the following lines:
// part of a method, that searches the transceiver to stop and assigned it to the 'transceiver' variable
peerconnection.removeTrack(transceiver.sender);
if (transceiver.direction === "sendrecv") transceiver.direction = "recvonly";
else transceiver.direction = "inactive";
I know that this will only set the remote track to a muted state and is like replacing the transceiver's sender track with null, but Chrome has not implemented direction = "stopped" or transceiver.stop() yet; see Issue 980879.
So what to do instead?
The remote peer just sees the transceiver's current direction become inactive, and the receiver track gets muted.
Ergo, it does not remove the track for the remote peer in my application; muted tracks accumulate over time and, even worse for video tracks, they get displayed like normal muted tracks.
I also cannot simply remove every muted track, since my application is required to allow muted and unmuted tracks (muted by the user, not muted due to being stopped).
Muting tracks by entirely removing them leads to another offer-answer exchange, and that takes time. I would prefer to just mute by replacing the track with null (without sending SDP back and forth), as the WebRTC standard seems to allow and Chrome already implements.
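For that replace-with-null route, the standard hook is RTCRtpSender.replaceTrack(null), which does not require renegotiation. A minimal sketch (the helper name nextDirection is mine; it just mirrors the if/else above):

```javascript
// Same direction bookkeeping as the if/else in the question.
function nextDirection(current) {
    return current === 'sendrecv' ? 'recvonly' : 'inactive';
}

// Mute without an SDP round-trip: the remote side just sees the
// track go muted. Assumes a live RTCRtpTransceiver in a browser.
async function muteSender(transceiver) {
    await transceiver.sender.replaceTrack(null);
    transceiver.direction = nextDirection(transceiver.direction);
}
```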
The next option would be correlating the transceivers on both ends and sending a message from the stopping peer to the remote one, so the remote one stops the received track. This should be possible (I believe the transceiver's mid is the same on both ends, according to the spec), but in my opinion it is an ugly way.
What I cannot do is just send over the media track id, since it is different for the stopping peer and the remote peer, so I cannot send a message like 'please stop your track with id xyz' (it would make things easy, but it just doesn't work like this).
So, now I have the following questions:
What is the current 'standard' way in which you and other WebRTC developers solve this, as long as stopping transceivers does not work in every browser? (Chrome fixing it in current versions will probably take some time, but being the most used browser, we cannot just ignore it.)
Has someone somehow (by abusing DTMF, using magic or whatever...) bodged a polyfill together? (I believe adapter.js hasn't made this possible.)
Is there another way besides muting = removing and sending remote-track-stop messages over the signaler? If not, which is the better option?
What I ended up doing was choosing the signaling solution.
I basically do:
// part of a method, that searches the transceiver to stop and assigned it to the 'transceiver' variable
peerconnection.removeTrack(transceiver.sender);
if (transceiver.direction === "sendrecv") transceiver.direction = "recvonly";
else transceiver.direction = "inactive";
signaler.send({type: 'transceiver:stop', data: transceiver.mid});
And on the other side
// received the sent message and called this method with message.data
onTransceiverStopMessage(mid) {
    // find (not filter) so we get the transceiver itself, not an array
    const transceiverToStop = peerconnection.getTransceivers()
        .find(transceiver => transceiver.mid === mid);
    transceiverToStop.receiver.track.stop();
    transceiverToStop.receiver.track.dispatchEvent(new Event('ended'));
}
The dispatchEvent call is necessary, since calling stop() on a track does not trigger the 'ended' event.
It works well for my purpose, but I am still waiting for a solution to the inactive transceivers piling up (they still impact performance, as the Chrome issue states).
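The manual dispatch works because MediaStreamTrack is an EventTarget, and dispatchEvent invokes listeners synchronously. A framework-free illustration using a plain EventTarget (available in modern browsers and in Node 15+):

```javascript
// Stand-in for a MediaStreamTrack: any EventTarget behaves the same way.
const target = new EventTarget();
let endedSeen = false;

target.addEventListener('ended', function () {
    endedSeen = true;
});

// Nothing fires 'ended' on its own (just as track.stop() doesn't),
// so we dispatch it manually; listeners run before the next line.
target.dispatchEvent(new Event('ended'));
// endedSeen is now true
```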

Consistently empty data using MediaRecorder API, intermittently

I have a simple setup for desktop capturing using HTML5 APIs. It consists of a simple webpage and a Chrome extension. The steps are:
1. Use the extension to get the sourceId.
2. Using the sourceId, call navigator.mediaDevices.getUserMedia to get the MediaStream.
3. Feed that MediaStream into a MediaRecorder instance for recording.
This setup works most of the time, but occasionally requestData() on the MediaRecorder instance consistently returns blobs with empty data. I am clueless as to what can cause a working setup to start misbehaving.
Some weird behaviour I noticed in the bad state:
- When I try to close/refresh the window, it doesn't respond.
- The MediaStreamTrack object from step 2 above is 'live', but as soon as I get to step 3 it becomes 'muted'.
- There's no pattern to it; sometimes it even happens the very first time I request the MediaStream (which rules out dangling resources eating up the contexts).
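For what it's worth, the "live but muted" symptom above can at least be detected before recording garbage. A sketch (the helper name is mine; it takes any MediaStream-shaped object):

```javascript
// Detect the bad state described above: a capture track that still
// reports readyState 'live' but has silently gone muted.
function streamLooksStalled(stream) {
    return stream.getVideoTracks().some(function (track) {
        return track.readyState === 'live' && track.muted;
    });
}
```

One could poll this before calling recorder.requestData() and tear down and re-acquire the stream instead of collecting empty blobs.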
Is there anything that I am doing wrong and am unaware of? Any help/pointers would be highly appreciated!

Red5 2-way camera setup (videochat)

I have a problem with Red5 in combination with Flash. For a personal project I am trying to make a Skype-like application. I already have an application that records the user's webcam and saves it with a custom filename on the Red5 server.
But I am stuck trying to connect one more user to that window for a video chat. I made a new video container in Flash, but I don't know how to connect a second client to the same stream in AS3 with Red5.
I searched the net, but I only found really old threads about Red5 in combination with Flex.
Maybe this helps in understanding my problem?
Could someone help me out, or point me in the right direction?
Video chat? You will need two streams per client: inbound and outbound. Outbound is the stream from the client to the media server; inbound is the consumed stream of another user. It will look like:
_streamOut = new NetStream(connection, NetStream.CONNECT_TO_FMS);
_streamIn = new NetStream(connection, NetStream.CONNECT_TO_FMS);
_streamOut.addEventListener(NetStatusEvent.NET_STATUS, onStreamOutNetStatus);
_streamIn.addEventListener(NetStatusEvent.NET_STATUS, onStreamInNetStatus);
_streamOut.attachAudio(microphone);
_streamOut.attachCamera(camera);
_streamOut.publish(hostPeerID);
_streamIn.play(companionPeerID);
Also, there are some helpful examples; did you check them?

Captured audio buffers are all silent on Windows Phone 8

I'm trying to capture audio using WASAPI. My code is largely based on the ChatterBox VoIP sample app. I'm getting audio buffers, but they are all silent (flagged AUDCLNT_BUFFERFLAGS_SILENT).
I'm using Visual Studio Express 2012 for Windows Phone. Running on the emulator.
I had the exact same problem and managed to reproduce it in the ChatterBox sample app if I set Visual Studio to native debugging and stepped through the code at any point.
Also, closing the app without going through the "Stop" procedure and stopping the AudioClient will require you to restart the emulator/device before you can capture audio data again.
It nearly drove me nuts before I figured out the aforementioned problems, but I finally got it working.
So:
1. Be sure NOT to do native debugging.
2. Always call IAudioClient->Stop() before terminating the app.
3. Make sure you pass the correct parameters to IAudioClient->Initialize().
I've included a piece of code that works 100% of the time for me. Error checking is left out for clarity:
LPCWSTR pwstrDefaultCaptureDeviceId =
    GetDefaultAudioCaptureId(AudioDeviceRole::Communications);
HRESULT hr = ActivateAudioInterface(pwstrDefaultCaptureDeviceId,
    __uuidof(IAudioClient2), (void**)&m_pAudioClient);
hr = m_pAudioClient->GetMixFormat(&m_pwfx);
m_frameSizeInBytes = (m_pwfx->wBitsPerSample / 8) * m_pwfx->nChannels;
hr = m_pAudioClient->Initialize(AUDCLNT_SHAREMODE_SHARED,
    AUDCLNT_STREAMFLAGS_NOPERSIST | AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
    latency * 10000, 0, m_pwfx, NULL);
hr = m_pAudioClient->SetEventHandle(m_hCaptureEvent);
hr = m_pAudioClient->GetService(__uuidof(IAudioCaptureClient),
    (void**)&m_pCaptureClient);
And that's it. Before calling this code, I start a worker thread that listens to m_hCaptureEvent and calls IAudioCaptureClient->GetBuffer() whenever the capture event is triggered.
Of course, using Microsoft.XNA.Audio.Microphone works fine too, but it's not always an option to reference the XNA framework. :)
It was a really annoying problem which wasted about two complete days of mine. My problem was solved by setting AudioClientProperties.eCategory to AudioCategory_Communications instead of AudioCategory_Other.
After this long trial-and-error period, I am not sure the problem won't come back in the future, because the API doesn't act very stable and every run may return a different result.
Edit: Yeah, my guess was right. Restarting the WP emulator makes the buffers silent again, but changing AudioClientProperties.eCategory back to AudioCategory_Other solves it again. I still don't know what is wrong or what the final solution is.
Again I encountered the same problem, and this time commenting out (removing) the line
properties.eCategory = AudioCategory_Communications;
solved the problem.
I can add my piece of advice for Windows Phone 8.1.
I made the following experiment:
1. Open the capture device. Buffers are not silent.
2. Open the render device with AudioDeviceRole::Communications. Buffers immediately go silent.
3. Close the render device. Buffers are not silent.
Then I opened the capture device with AudioDeviceRole::Communications, and the capture device worked fine the whole time.
On Windows 10 the capture device works all the time, whether or not you open it with AudioDeviceRole::Communications.
I've had the same problem. It seems you can either use only AudioCategory_Other, or create an instance of VoipPhoneCall and use only AudioCategory_Communications.
So the solution in my case was to use AudioCategory_Communications and create an outgoing VoipPhoneCall. You should implement the background agents as in the ChatterBox VoIP sample app for the VoipCallCoordinator to work.

IIS Web server - How do I limit one login per user for video streaming

I am developing an IIS 8 website where users can log in and, once logged in, watch videos. I want to serve my videos, which may be up to six hours long, using HTML5 <video />. I want to limit users to one login only so they cannot share their credentials with others; this will be a pay site. For static (non-video) pages this is easy, but I'm not sure how to do it for video. Once an HTML5 video starts playing, how can I prevent the user from logging in again? Or, if he does log in again, is there a way to interrupt the video playback from the first login? I'd love to hear ideas...
thanks
David
While I'm not sure about your framework and criteria, have you considered using jQuery and Ajax to send requests to your server every X minutes to verify that the session is still active? Here is some sample, untested Ajax; just use JavaScript's setTimeout function to send it every X minutes:
$.ajax({
    url: "yourpage.asp",
    type: "post",
    data: "sessionId=sessionIdVal",
    dataType: "json",
    success: function (msg) {
        // Do logic here with msg
    }
});
And documentation.
Good luck.
Elaborating on sgeddes's answer, a solution I've used in the past is:
- When the user logs in, generate a GUID that gets stored both as a cookie for the user and in a lookup table, so if the user logs in again the GUID (and timestamp) will change.
- Every n seconds, have the client poll the server to make sure it still has the right GUID; if not, stop the video so only the most recent login continues to work.
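A sketch of that polling check on the client, building on the $.ajax sample above (the /session/guid endpoint and its {guid: ...} response shape are made up for illustration, not an existing API):

```javascript
// True while this tab's GUID still matches the server's current one.
function sessionStillCurrent(localGuid, serverGuid) {
    return localGuid === serverGuid;
}

// Poll every intervalMs; pause playback if a newer login replaced
// our GUID. Returns the interval id so the caller can clear it.
function startSessionWatch(localGuid, video, intervalMs) {
    return setInterval(function () {
        $.ajax({
            url: '/session/guid',   // hypothetical endpoint
            dataType: 'json',
            success: function (msg) {
                if (!sessionStillCurrent(localGuid, msg.guid)) {
                    video.pause();  // the most recent login wins
                }
            }
        });
    }, intervalMs);
}
```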
However, there are a couple of issues with this. If the video is a single progressive stream, once connected there's nothing to stop the user downloading it and watching it locally; and if they want to get really tricky, they could interrupt the polling JavaScript and simply stop the check.
If you want to go a bit further, you can implement a server-side handler that checks the validity of the cookie token before delivering more of the video (something similar to http://blog.offbeatmammal.com/post/2006/06/30/Using-ASPNET-to-restrict-access-to-images.aspx). However, because the <video> element does its best to download enough of the video to play without buffering, depending on how long the video is and how much pre-buffering has happened by the time the check kicks in, it may be too late.