Why does the ZoneMinder Monitor function use the camera constantly even when no one is watching?

My IP camera has an LED that blinks when someone is streaming from it. When I add the camera to ZoneMinder with the Monitor function, the LED blinks constantly, meaning ZoneMinder is streaming from it even when nobody is using it: no one is watching the camera through the web site and nothing is accessing it through the API. The problem is that it consumes network bandwidth constantly. That would make sense for the Modect or Record functions, but the Monitor function shouldn't need to stream unless someone is watching. Wouldn't it be better to read from the camera only when somebody is actually using it (app or website)? Is there an option I'm missing?

ZoneMinder converts/relays any stream, whether MJPEG or H.264, into JPEG images, so it needs to consume the stream all the time. If you really want to stop it, you can shut down the capture daemon, disable the camera, and so on.
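Setting the monitor's Function to None in the web console disables it persistently. You can also stop the capture daemon by hand; a sketch, assuming a stock install where the capture daemon for monitor ID 1 is zmc -m 1 (substitute your own monitor ID):

    # Stop the capture daemon for monitor 1 so ZoneMinder stops pulling the stream
    sudo zmdc.pl stop zmc -m 1

    # Start it again when you want the monitor back
    sudo zmdc.pl start zmc -m 1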

Related

WebXR and WebRTC don't work simultaneously

I am new to WebXR. I was trying to use WebRTC with WebXR: the user first enters an AR session and then creates a WebRTC peer connection, but ICE candidates are not generated in Chrome for Android if the user is in the AR session. As soon as the user exits the AR session, the ICE candidates are transferred. Is this a bug in Chrome?
The problem is hardware related. Some devices allow the front and back cameras to be used simultaneously, and on those devices the code worked properly. On other devices the front and back cameras cannot be accessed simultaneously, so the code does not work there. Also, the WebXR Device API does not allow access to the camera feed at the moment; it is only a proposed feature.
I haven't tried it myself, but in theory you can use the canvas captureStream() API to stream the WebXR canvas.
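A minimal, untested sketch of that idea (the canvas id 'xr-canvas' is a placeholder, and as a later comment notes, WebXR doesn't normally render into a canvas you can grab like this):

    // Canvas that the XR content is (assumed to be) rendered into
    const canvas = document.getElementById('xr-canvas');

    // Capture the canvas as a 30 fps MediaStream
    const stream = canvas.captureStream(30);

    // Hand the captured video track to the peer connection
    const pc = new RTCPeerConnection();
    for (const track of stream.getVideoTracks()) {
      pc.addTrack(track, stream);
    }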
Can you post your code here? You might want to tweak how you pass the stream to the WebRTC connection.
As far as I know, it is not possible to use canvas.captureStream() because WebXR doesn't render to the canvas directly.
I am also looking for a way to stream a WebXR session via WebRTC, so I would be highly interested in your solution, shivamag00!
Hope to hear from you!

Is it possible to pass raw video frames TO a browser?

Is it possible to pipe raw video frames to a browser/website? For instance, the decoding could be done locally in GStreamer and the frames then forwarded somehow to a browser.
EDIT:
I realize that my description was a bit shaky. The use case I would like to support is to send encoded video to someone, decode it on their computer, do some advanced filtering that cannot be done in the browser, and then pipe the frames to the browser. Obviously re-encoding them would just be a waste of time and energy.
All I can find is people explaining how video frames can be grabbed FROM a browser; no one seems interested in sending them TO a browser. The ugly option would be to use WebRTC and re-encode the frames into VP8 before sending them to the browser.
So my final question is whether it is possible to write to the rendering pipeline of a browser. I know next to nothing about web programming; I usually just deal with images and video.
Thank you for your support :)
PS: Forgive my lack of knowledge, but is it possible to have a client on someone's computer writing to a local TCP port, and to access that TCP port from a website in the browser (potentially asking the user to allow the connection)?
Yes, this is possible. Since you're running a local GStreamer pipeline, you might look into this project: https://github.com/Samsung/ChromiumGStreamerBackend. Basically, they use GStreamer as the native media renderer inside the browser.
Aside from that, you can create a browser extension that launches a native application and receives data from GStreamer to shuffle to your page: https://developer.chrome.com/extensions/nativeMessaging
If you don't want to make an extension, you can instead create a small Web Socket server.
Either way, you can write the raw pixel data to a Canvas... no need to re-encode/decode the video. https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API
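For instance, a minimal sketch of the Web Socket route (the endpoint, frame size, and one-raw-RGBA-frame-per-binary-message framing are assumptions for illustration, not a real protocol):

    const WIDTH = 640, HEIGHT = 360; // assumed; must match what the sender emits

    const canvas = document.getElementById('video-canvas');
    const ctx = canvas.getContext('2d');

    // Local WebSocket server fed by the GStreamer pipeline (hypothetical endpoint)
    const ws = new WebSocket('ws://localhost:8765');
    ws.binaryType = 'arraybuffer';

    ws.onmessage = (event) => {
      // Each message is assumed to carry one raw RGBA frame
      const pixels = new Uint8ClampedArray(event.data);
      const frame = new ImageData(pixels, WIDTH, HEIGHT);
      ctx.putImageData(frame, 0, 0); // drawn without any re-encoding
    };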

Record videoconference application to flv

I've spent plenty of time on this problem, and it looks like I need some help. I have a web conference application that provides live video streaming, chat, document sharing, whiteboard drawing, desktop sharing, etc. Now I want to record everything that happens in a given webinar, including video and sound, so I'm looking for tools that can help with this goal.
Here's input data:
This is an Adobe Flash based application
It uses a Wowza server
Everything should be recorded on the server
Many webinars can be in recording mode at the same time
The recording should be produced as a video (FLV, MP4, or whatever)
What I've done so far and what problems I have:
I have implemented recording on the server side, but the result is not a video, just a list of commands to recreate the webinar. It works, but it has lots of limitations and problems with rewinding.
So now I'm testing this FLV Encoding library. I created an AIR application that starts on the server when a recording is needed, connects to the given webinar, and takes screenshots of itself with the BitmapData.draw() method. It works pretty neatly, but has some limitations that I'm looking for help with:
First of all, there is the sound problem. I have no idea how to capture all sounds from all sources in Flash. From my tests and googling so far, I conclude that SoundMixer.computeSpectrum() won't help me do this. Maybe it could be done on the server side by mixing all the streams at the right times, but I think that would lead to synchronization problems, and I'd prefer to capture sound on the client. Maybe there is a way to capture the audio byte array from an RTMP stream somehow?
Security problems. We have two kinds of them. The first is with streaming videos: the BitmapData.draw() method throws exceptions even after adding <StreamAudioSampleAccess>true</StreamAudioSampleAccess> and <StreamVideoSampleAccess>true</StreamVideoSampleAccess> on the server. There are lots of posts about this problem and no good solution.
The more complex problem is that YouTube videos can be opened in the webinar using the API player, and in that situation I have no idea how to resolve the security problem. Maybe someone knows a way or a workaround to use BitmapData.draw() on the YouTube AS3 player?
Or maybe there is another good way to solve my recording issue?
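For reference, the screenshot capture described above is roughly this kind of loop (a simplified sketch, not my actual code):

    import flash.display.BitmapData;
    import flash.events.TimerEvent;
    import flash.utils.Timer;

    var frameTimer:Timer = new Timer(100); // grab ~10 frames per second
    frameTimer.addEventListener(TimerEvent.TIMER, grabFrame);
    frameTimer.start();

    function grabFrame(e:TimerEvent):void {
        var shot:BitmapData = new BitmapData(stage.stageWidth, stage.stageHeight);
        // This is the call that throws a security exception for
        // protected content such as video streams or the YouTube player
        shot.draw(stage);
        // ...pass 'shot' to the FLV encoding library here
    }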
The free Apache OpenMeetings conferencing software [1] includes a Java recording application, which should work in the 3.0 release. Just use it.
[1] http://openmeetings.apache.org/

In AS3 how can you detect if someone pulls their camera out of the USB port?

In AS3, if the SWF successfully gets hold of someone's camera and they start streaming video, but then mid-stream they accidentally wiggle the camera out of the USB port, or the camera simply breaks down, how can you detect that from the user's side? I've tried using event listeners and also polling different variables every five seconds, but neither has worked; none of the public properties of Camera or its events seem to act any differently when something like that happens. And apparently you can't just keep scanning the computer for devices (for good reason, I guess).
Is there something I'm missing here? Is there a way to detect, from a user's copy of a SWF (FP or AIR, but much more importantly FP), when their camera has effectively stopped as the result of something going wrong, such as it being wiggled out of the computer by mistake? If so, how? Thanks!
While you may have difficulty in detecting when the camera/microphone stops working or is deactivated, you can discern that something has gone wrong if you are publishing the video/audio to a server with a NetStream.
The NetStream has an info property which is a NetStreamInfo object. It will give you both a running total of bytes as well as a rate of bytes/second of data that the NetStream is sending to the server.
Running totals: byteCount, audioByteCount, videoByteCount
Current rates in bytes/second: currentBytesPerSecond, audioBytesPerSecond, videoBytesPerSecond
If you use the running totals, you need to periodically check the byteCount and calculate your own rate. Or you can let Flash Player do all the work and use the rates that it's calculating. In the case of recording, these values give you an indication of how much data the NetStream is receiving from the camera/microphone (and going to send to the server).
We found that we could reliably determine on the client that something had gone wrong when the rate fell below 5 kilobits/second. We used the same threshold and similar calculations on an FMS server (w/custom server side Actionscript) as well.
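A minimal sketch of that client-side check (ns is assumed to be a NetStream that is already publishing; the threshold is the 5 kilobits/second from above, i.e. 625 bytes/second):

    import flash.events.TimerEvent;
    import flash.utils.Timer;

    // 5 kilobits/second expressed in bytes/second
    const MIN_BYTES_PER_SECOND:Number = 5 * 1000 / 8;

    var rateTimer:Timer = new Timer(5000); // check every 5 seconds
    rateTimer.addEventListener(TimerEvent.TIMER, checkPublishRate);
    rateTimer.start();

    function checkPublishRate(e:TimerEvent):void {
        // ns is the NetStream publishing the camera/microphone
        if (ns.info.currentBytesPerSecond < MIN_BYTES_PER_SECOND) {
            trace("Publish rate collapsed - the camera may have stopped");
        }
    }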
I don't recall a proper "get camera status" call you can make on demand but you can try listening for the status event and hope there's one fired on disconnect.
If you haven't already done so, in your 5-second check try if (myCameraObject == null), assuming var myCameraObject:Camera = Camera.getCamera();
If you can't find a better solution, consider placing a "Detect camera" button behind the camera feed. If the camera disconnects then the user would see the button and could click it to reconnect.
You can check whether the camera object is null as suggested by @ToddBFisher, check that Camera.names.length > 0, or inspect a few other properties of the camera instance (see the links below; a sketch follows them). But with each of these you'll want to check at a regular interval.
Working with Cameras
Monitoring Camera Status
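A minimal polling sketch along those lines (whether these properties actually change on a physical unplug varies, which is the whole problem discussed in the question):

    import flash.events.TimerEvent;
    import flash.media.Camera;
    import flash.utils.Timer;

    var watchdog:Timer = new Timer(5000); // poll every 5 seconds
    watchdog.addEventListener(TimerEvent.TIMER, checkCamera);
    watchdog.start();

    function checkCamera(e:TimerEvent):void {
        // No devices listed at all, or getCamera() now returns null
        if (Camera.names.length == 0 || Camera.getCamera() == null) {
            trace("Camera appears to be disconnected");
        }
    }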

Post-processing captured video in AS3, creating slow motion

I have an interesting project wherein I need to allow users to capture video of themselves with a webcam at a kiosk, after which I email them a link to their video. The trick is the resulting video needs to be a 'slow motion' version of the captured video. So for example, if someone creates a 2 minute movie, the resulting movie will be 4 minutes.
I'd like to build this in Flex / AS3 if possible. I don't have issues capturing the video and storing it / generating and emailing a link, but slowing down the video is the real mind bender. I'm unsure how to approach 'batch post-processing' a set of videos using Adobe tools.
Has anyone had a project similar to this or have suggestions on routes to take in order to do this?
Thanks!
-Josh
This is absolutely feasible from the client side, contrary to what some may believe. :)
http://code.google.com/p/flvrecorder/
Just adjust the capture rate, which shouldn't be too difficult since all the source is there.
Alternatively, you could write an AIR app that launches Adobe Media Encoder after writing a file, using a preset that includes FTP info, etc. Or you can just use the Socket class to connect and upload over FTP.
http://code.google.com/p/fl-ftp/
It is not feasible to do this client-side.
Capture the video and send it to the server.
Use a library like FFmpeg to do your conversions.
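For example, a sketch of the FFmpeg step (filenames are placeholders; setpts=2.0*PTS doubles every frame's timestamp, so a 2-minute capture becomes a 4-minute slow-motion video; audio is dropped here and would need separate handling, e.g. the atempo filter):

    ffmpeg -i capture.flv -filter:v "setpts=2.0*PTS" -an slowmo.flv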