I am using SimpleWebRTC to create a video chat room application.
One of the requirements is that a peer machine with no microphone and webcam should at least be able to hear the audio and see the video of the other peers.
Is it possible to do?
I tried this using the constraints {audio: false, video: false} in plain WebRTC, and it works on a machine that has no microphone or webcam.
How can I accomplish this using SimpleWebRTC?
Thanks!
I am not familiar with SimpleWebRTC, but I use PeerJS. To answer a call even when you don't have a webcam or mic, all I do is answer with null instead of the stream.
So in PeerJS, instead of
call.answer(window.localStream);
you can say
call.answer(null);
Try to find the code in SimpleWebRTC where the call is made or answered and try this.
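For illustration, here is a minimal sketch of the receive-only side in PeerJS; the peer object and the remoteVideo element are assumptions of mine, and older browsers may need URL.createObjectURL() instead of srcObject:
// Answer incoming calls without attaching a local stream
peer.on('call', function (call) {
  call.answer(null); // no webcam/mic, so nothing is sent
  call.on('stream', function (remoteStream) {
    // render the other peer's audio/video
    var video = document.getElementById('remoteVideo');
    video.srcObject = remoteStream;
    video.play();
  });
});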
Here is my question:
I use the Android MediaMuxer API to combine an H.264 stream and an AAC stream into an MP4 file. When I want to stop recording, I call mMediaMuxer.stop(), and the resulting MP4 file plays fine.
But sometimes something unexpected happens, like a sudden power loss, so there is no time to call mMediaMuxer.stop(), and the resulting file cannot be played at all.
Does anybody know how to fix this problem? I want to be able to play the video even when mMediaMuxer.stop() was never called. Or is there another API or SDK that combines H.264 and AAC streams more robustly?
We are running into some issues using WebRTC with an SFU. Occasionally the video track on an incoming stream has muted: true and readyState: "muted". Those are read-only properties, so I know I am not setting them anywhere. Are those properties determined by the browser (Chrome), or are they explicitly set on the stream/track by the SFU? Any help is welcome, but I'd love to see some documentation about this somewhere. For reference, the SFU we are using is Jitsi Videobridge.
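A small sketch of how that state can be observed, assuming pc is the RTCPeerConnection receiving the stream (the mute/unmute events are fired by the browser itself when the source stops or resumes delivering media):
pc.onaddstream = function (event) {
  var track = event.stream.getVideoTracks()[0];
  console.log('initial state:', track.muted, track.readyState);
  // these events come from the browser, not from anything written by the SFU
  track.onmute = function () { console.log('video track muted'); };
  track.onunmute = function () { console.log('video track unmuted'); };
};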
I made a simple WebRTC application and I think I understand the WebRTC framework. I want to add a new feature to my app that allows only one user to share audio/video while the second user shares neither video nor audio.
How can I do this? How will it differ from the standard mechanism?
For this there is no difference in the standard mechanism. When specifying the constraints to getUserMedia(), you would pass, for example, {video: false} or {audio: true}. I think you need at least audio, video, or data to create an SDP offer; if not, simply create an SDP answer without attaching any streams.
After connecting, you can simply disable the audio/video tracks of the stream using getAudioTracks()/getVideoTracks() of MediaStream. Each track has an enabled property which can be set to true or false.
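A minimal sketch, assuming stream is the local MediaStream returned by getUserMedia():
// Stop sending video without renegotiating; a disabled video track
// transmits black frames (a disabled audio track transmits silence).
stream.getVideoTracks().forEach(function (track) {
  track.enabled = false;
});
// Re-enable later with: stream.getVideoTracks()[0].enabled = true;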
Do not call getUserMedia() and do not call peerConnection.addStream() for the user who should not share video/audio.
There is one condition if you do this: you have to pass these constraints to peerConnection.createOffer().
Firefox:
{ offerToReceiveVideo: true, offerToReceiveAudio: true }
Chrome:
{ mandatory: { OfferToReceiveAudio: true, OfferToReceiveVideo: true } }
You should pass these constraints when creating the offer because, by default, the offer only includes the media types of the streams you have added.
To decide dynamically which party only sends media and which only receives, you can use the SDP attributes a=sendonly and a=recvonly on the corresponding media sections, or negotiate this between the two parties.
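Putting this together, a minimal sketch of the receive-only side using the legacy callback API and the Chrome-style constraints above (configuration and the signaling channel are assumptions):
var pc = new RTCPeerConnection(configuration);
// No getUserMedia()/addStream() here: this peer only receives.
pc.createOffer(function (offer) {
  pc.setLocalDescription(offer);
  // deliver the offer to the other party via your signaling channel
}, function (error) {
  console.error('createOffer failed:', error);
}, { mandatory: { OfferToReceiveAudio: true, OfferToReceiveVideo: true } });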
I have a problem with Red5 in combination with Flash. For a personal project I am trying to make a Skype-like application. I already have an application that records the user's webcam and saves it with a custom filename on the Red5 server.
But I am stuck trying to connect one more user to that window for a video chat. I made a new video container in Flash, but I don't know how to connect a second client to the same stream in AS3 with Red5.
I searched the net, but I only find really old threads about Red5 in combination with Flex.
Maybe this helps in understanding my problem?
Could someone help me out, or point me in the right direction?
Video chat? You will need two streams for every client: inbound and outbound. Outbound is the stream from the client to the media server; inbound is the consumed stream of the other user. So it will look like:
// Outgoing stream: publishes this client's camera/mic to the server.
_streamOut = new NetStream(connection, NetStream.CONNECT_TO_FMS);
// Incoming stream: plays the stream published by the other user.
_streamIn = new NetStream(connection, NetStream.CONNECT_TO_FMS);
_streamOut.addEventListener(NetStatusEvent.NET_STATUS, onStreamOutNetStatus);
_streamIn.addEventListener(NetStatusEvent.NET_STATUS, onStreamInNetStatus);
// Attach the local devices and publish under this client's stream name.
_streamOut.attachAudio(microphone);
_streamOut.attachCamera(camera);
_streamOut.publish(hostPeerID);
// Subscribe to the other client's stream by its published name.
_streamIn.play(companionPeerID);
Also, there are some helpful examples; did you check them?
I would like to get the NetStream width/height when receiving an RTMFP stream. This is important because the video component needs different dimensions when, for example, the user receives a 4:3 or a 16:9 stream.
Unfortunately, the onMetaData callback for NetStream does not work as it does for RTMP streams.
Is there a workaround?
You might try using different ports and see if onMetaData gives you anything different. I believe the three main ones are 1935, 443 and 80.
The following link can give you further documentation on configuring your server:
http://help.adobe.com/en_US/flashmediaserver/configadmin/WSdb9a8c2ed4c02d261d76cb3412a40a490be-8000.html