I made a simple WebRTC application and I think I understand the WebRTC framework. I want to add a new feature to my app that allows only one user to share audio/video while the second user does not share video or audio.
How can I do this? How will it differ from the standard mechanism?
There's no difference in the standard mechanism for this. When specifying the constraints to getUserMedia() you would pass something like {video: false} or {audio: true}. I think you need at least audio, video or a data channel to create an SDP offer; if not, simply create an SDP answer without attaching any streams.
Once connected, you can simply disable the audio/video tracks of the stream using getAudioTracks()/getVideoTracks() of MediaStream. Each track has an enabled property which can be set to true or false.
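For example, a minimal sketch (assuming localStream is the MediaStream you got from getUserMedia()):
localStream.getAudioTracks().forEach(function (track) { track.enabled = false; }); // mute outgoing audio
localStream.getVideoTracks().forEach(function (track) { track.enabled = false; }); // stop sending video frames
// set enabled back to true to resume sending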
Do not get a MediaStream and do not call peerConnection.addStream() for the user who should not share their video/audio.
There is one condition if you do this.
You have to add these constraints to peerConnection.createOffer:
Firefox:
{ offerToReceiveVideo: true, offerToReceiveAudio: true }
Chrome:
{mandatory: { OfferToReceiveAudio: true, OfferToReceiveVideo: true }}
You should give these constraints while creating the offer because, by default, the peer will only offer to receive the kinds of streams you added.
To decide dynamically who only sends media and who only receives it, you can use the SDP attributes a=sendonly and a=recvonly for the corresponding media to signal this, or negotiate it between the two parties.
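A rough sketch of the receive-only peer, assuming the modern promise-based createOffer() (older Chrome needed the mandatory/OfferToReceive* form shown above):
// Receive-only peer: no getUserMedia(), no addStream()
peerConnection.createOffer({ offerToReceiveAudio: true, offerToReceiveVideo: true })
  .then(function (offer) {
    return peerConnection.setLocalDescription(offer);
  })
  .then(function () {
    // send peerConnection.localDescription to the other peer over your signaling channel
  })
  .catch(function (error) {
    console.error(error);
  });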
Related
I am currently trying to build a WebRTC application that uses the RTCRtpTransceiver objects of the WebRTC standard. I add video and audio tracks to the connection and then, some time later, try to remove them.
I do this with the following lines:
// part of a method that searches for the transceiver to stop and assigns it to the 'transceiver' variable
peerconnection.removeTrack(transceiver.sender);
if (transceiver.direction === "sendrecv") transceiver.direction = "recvonly";
else transceiver.direction = "inactive";
I know that this will only set the remote track to a muted state and is like replacing the transceiver's sender track with null, but Chrome has not implemented direction = "stopped" or transceiver.stop() yet, see Issue 980879.
So what to do instead?
The remote peer just sees the current direction of the receiver becoming inactive and the receiver track gets muted.
Ergo, it does not remove the track for the remote peer in my application; muted tracks accumulate over time, and it is even worse for video tracks: they get displayed like normal muted tracks.
I also cannot remove every muted track, since my application is required to allow muted and unmuted tracks (muted by the user, not muted due to being stopped).
Muting tracks by entirely removing them would lead to another offer/answer exchange, and this takes some time. I would prefer to just mute by replacing the track with null (without having to send SDP back and forth), as the WebRTC standard seems to allow and Chrome already implements.
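For reference, the null-replacement mute I am talking about looks roughly like this (savedTrack is just a placeholder for the original track):
// mute without renegotiation by sending nothing on this sender
transceiver.sender.replaceTrack(null).then(function () {
  // remote side now receives silence/black frames, no SDP exchange happened
});
// unmute later by restoring the original track
// transceiver.sender.replaceTrack(savedTrack);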
The next option would be correlating the transceivers on both ends and sending a message from the stopping peer to the remote one so that the remote one stops the received track. This may be possible (I believe the transceiver's mid should be the same on both ends, according to the spec), but in my opinion it is also an ugly approach.
What I cannot do is just send over the media track id, since it is different for the stopping peer and the remote peer, so I cannot just send a message like 'please stop your track with id xyz' (it would make things easy, but it just doesn't work like this).
So, now I have the following questions:
What is the current 'standard' way in which you and other WebRTC developers solve this, as long as stopping transceivers does not work in every browser? (Chrome fixing it in current versions will probably take some time, but since it is the most used browser, we cannot just ignore it.)
Did someone somehow (by abusing DTMF, using magic or whatever...) bodge a polyfill together? (I believe Adapter.js hasn't made this possible.)
Is there another way besides muting-by-removing and sending remote-track-stop messages over the signaler? If not, which is the better option?
What I ended up doing is choosing the signaling-solution.
I basically do
// part of a method that searches for the transceiver to stop and assigns it to the 'transceiver' variable
peerconnection.removeTrack(transceiver.sender);
if (transceiver.direction === "sendrecv") transceiver.direction = "recvonly";
else transceiver.direction = "inactive";
signaler.send({type: 'transceiver:stop', data: transceiver.mid});
And on the other side
// received the sent message and called this method with message.data
onTransceiverStopMessage(mid) {
  // find() instead of filter(): we need the single matching transceiver, not an array
  const transceiverToStop = peerconnection
    .getTransceivers()
    .find(transceiver => transceiver.mid === mid);
  transceiverToStop.receiver.track.stop();
  transceiverToStop.receiver.track.dispatchEvent(new Event('ended'));
}
The dispatchEvent call is necessary, since calling stop() on a track does not trigger the ended event.
It works well for my purpose, but I am still waiting for a solution to the inactive transceivers piling up (since they still impact performance, as the Chrome issue states).
Running into some issues using WebRTC with an SFU. Occasionally we run into problems where the video track on an incoming stream has muted: true and readyState: "muted". Those are read-only properties, so I know I am not setting them anywhere. Are those properties determined by the browser (Chrome), or are they explicitly set on the stream/track by the SFU? Any help is welcome, but I'd love to see some documentation somewhere about this. For reference, the SFU we are using is Jitsi Video Bridge.
Is it possible to receive both video and audio from another peer if the peer who called createOffer() only allowed audio when requested via getUserMedia()?
Explanation by scenario:
Alice connects to a signalling server, and when getUserMedia() is called, chooses to share both video and audio.
Bob connects to the signalling server, and when getUserMedia() is called, only shares audio.
As Bob is the last to the party, Bob creates the peer connection offer via RTCPeerConnection.createOffer(). He shares his localDescription, which contains SDP data that does not mention video.
The resultant connection is audio-only as the SDP data only contained audio-related information.
Can an offer be created that asks to receive video data without sharing it?
So the key was in the creation of the offer.
Referring to the Specification
The WebRTC 1.0 specification says:
4.2.5 Offer/Answer Options
These dictionaries describe the options that can be used to control the offer/answer creation process.
dictionary RTCOfferOptions {
    long    offerToReceiveVideo;
    long    offerToReceiveAudio;
    boolean voiceActivityDetection = true;
    boolean iceRestart = false;
};
In the case of video:
offerToReceiveVideo of type long
In some cases, an RTCPeerConnection may wish to receive video but not send any video. The RTCPeerConnection needs to know if it should signal to the remote side whether it wishes to receive video or not. This option allows an application to indicate its preferences for the number of video streams to receive when creating an offer.
Solution
RTCPeerConnection.createOffer() can take MediaConstraints as an optional third parameter.
The example I found was from the WebRTC for Beginners article:
Creating Offer SDP
peerConnection.createOffer(function (sessionDescription) {
  peerConnection.setLocalDescription(sessionDescription);
  // POST-Offer-SDP-For-Other-Peer(sessionDescription.sdp, sessionDescription.type);
}, function (error) {
  alert(error);
}, { 'mandatory': { 'OfferToReceiveAudio': true, 'OfferToReceiveVideo': true } });
These MediaConstraints can also be used with createAnswer().
Bob's offer will contain only audio, but Alice will also share her video.
When Bob later wishes to add (video) streams, he calls RTCPeerConnection.addStream() and a (re-)negotiation is needed (see the negotiationneeded event). This allows Bob to add different or additional streams at any time he wishes. You just have to make sure that the offer/answer is exchanged correctly (e.g. at the negotiationneeded event).
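A rough sketch of that renegotiation (the sendOfferToRemotePeer name is just a placeholder for your signaling code):
peerConnection.onnegotiationneeded = function () {
  peerConnection.createOffer()
    .then(function (offer) {
      return peerConnection.setLocalDescription(offer);
    })
    .then(function () {
      sendOfferToRemotePeer(peerConnection.localDescription); // via your signaling channel
    })
    .catch(function (error) {
      console.error(error);
    });
};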
I wrote a (Dart-based) WebRTC library that might help you to see how it works. See Sender and Receiver.
I am using SimpleWebRTC to create a video chat room application.
One of the requirements is that a peer machine that has no microphone and webcam should at least be able to hear and see the video of other peers.
Is it possible to do?
I tried this using the constraints {audio: false, video: false} in regular WebRTC and it works on a machine that has no microphone and webcam.
How to accomplish this using simpleWebRTC?
Thanks!
I am not familiar with SimpleWebRTC, but I use PeerJS. To answer even if you don't have a webcam or mic, all I do is answer with null instead of the stream.
So in Peerjs instead of
call.answer(window.localStream);
you can say
call.answer(null);
Try to find the code in SimpleWebRTC where the call is made or answered and try this.
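For reference, a minimal PeerJS sketch of answering without any local media (the video element lookup is just for illustration):
peer.on('call', function (call) {
  call.answer(null); // answer without sending a local stream
  call.on('stream', function (remoteStream) {
    document.querySelector('video').srcObject = remoteStream; // play the remote peer's audio/video
  });
});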
Is it possible for a Chrome extension to listen for streaming audio from any of the browser's tabs? I would like to capture the streaming audio data and then analyse it.
Thanks
You could try three approaches, though none of them is guaranteed to meet your needs.
Before going into more detailed descriptions, I must note that Chrome extensions do not provide convenient tools for working at the per-connection level, the sufficiently low level required for stream capturing. This is by design. This is why the first option is:
To look at other browsers, for example Firefox, which provides low-level APIs for connections. They are already known to be used by similar extensions. You may have a look at MediaStealer. If you do not have a specific requirement to build your system on Chrome, you should possibly move to Firefox.
You can develop a Chrome extension which intercepts HTTP requests by means of the webRequest API, analyses their headers and extracts media URLs (for example those with an audio/mpeg MIME type in the HTTP headers). For a quick code example you may look at the following SO question: How to change response header in Chrome. Having the URL, you may force a download of the media as a file. It will land in the default downloads folder and may have an unfriendly name. (I made such an extension, but I did not have requirements for further processing.) If you need to further process such files, it can be a challenge to monitor them in the folder and run additional analysis in a separate program.
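A rough sketch of that header inspection, assuming the extension's manifest declares the webRequest permission and host permissions:
chrome.webRequest.onHeadersReceived.addListener(function (details) {
  var contentType = (details.responseHeaders || []).find(function (header) {
    return header.name.toLowerCase() === 'content-type';
  });
  if (contentType && /audio\/mpeg/i.test(contentType.value)) {
    console.log('Audio stream detected:', details.url);
    // hand details.url to a download or to further processing here
  }
}, { urls: ['<all_urls>'] }, ['responseHeaders']);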
You may have a look at NPAPI plugins in general, and their streaming APIs in particular. I can imagine that you create a plugin registered for, again, the audio/mpeg MIME type, which receives the data via the NPP_NewStream, NPP_WriteReady and NPP_Write methods. The plugin can be wrapped into a Chrome extension. Though I have made NPAPI plugins, I never used this API, and I'm not sure it will work as expected. Nevertheless, I'm mentioning this possibility here for completeness. This method requires some coding other than web coding, meaning C/C++. NB: NPAPI plugins are deprecated and have not been supported in Chrome since September 2015.
Taking into account that you have some external (to the extension) "fingerprinting service" in mind, which sounds like intelligent data processing, you may be interested in building the whole system outside of a browser. For example, you could involve an HTTP proxy that saves media from the passing traffic.
If you're writing a Chrome extension, you can use the Chrome tabCapture API to record audio.
chrome.tabCapture.capture({audio: true}, function (stream) {
  var recorder = new MediaRecorder(stream);
  [...]
});
The rest is left as an exercise to the reader; MDN has more documentation on how to use MediaRecorder.
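For example, the [...] part could be filled in roughly like this (the audio/webm type and the chunks name are just illustrative):
var chunks = [];
recorder.ondataavailable = function (event) {
  chunks.push(event.data);
};
recorder.onstop = function () {
  var blob = new Blob(chunks, { type: 'audio/webm' });
  // analyse or upload the recorded audio here
};
recorder.start();
// later: recorder.stop();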
When this question was asked in 2013, neither chrome.tabCapture nor MediaRecorder existed.
Mac OSX solution using soundflower: http://rogueamoeba.com/freebies/soundflower/
After installing Soundflower it should appear as a separate audio device in the sound preferences (Apple > System Preferences > Sound). Divert the computer's audio to the 2ch option (stereo; 16ch is surround), then inside a DAW, such as Audacity, set the audio input to Soundflower. Now the sound should be channelled to your DAW, ready for recording.
Note: having diverted the audio from the internal speakers to Soundflower, you will only be able to hear the audio if the Soundflowerbed app is actually open. You know it's open if there's an 8-legged blob in the top right menu bar. Clicking this icon gives you the Soundflower options.
My privoxy has the following log:
2013-08-28 18:25:27.953 00002f44 Request: api.audioaddict.com/v1/di/listener_sessions.jsonp?_method=POST&callback=_AudioAddict_WP_ListenerSession_create&listener_session%5Bid%5D=null&listener_session%5Bis_premium%5D=false&listener_session%5Bmember_id%5D=null&listener_session%5Bdevice_id%5D=6&listener_session%5Bchannel_id%5D=178&listener_session%5Bstream_set_key%5D=webplayer&_=1377699927926
2013-08-28 18:25:27.969 0000268c Request: api.audioaddict.com/v1/ping.jsonp?callback=_AudioAddict_WP_Ping__ping&_=1377699927928
2013-08-28 18:25:27.985 00002d48 Request: api.audioaddict.com/v1/di/track_history/channel/178.jsonp?callback=_AudioAddict_TrackHistory_Channel&_=1377699927942
2013-08-28 18:25:54.080 00003360 Request: pub7.di.fm/di_progressivepsy_aac?type=.flv
So I got the stream url and record it:
D:\Profiles\user\temp>wget pub7.di.fm/di_progressivepsy_aac?type=.flv
--18:26:32-- http://pub7.di.fm/di_progressivepsy_aac?type=.flv
=> `di_progressivepsy_aac#type=.flv'
Resolving pub7.di.fm... done.
Connecting to pub7.di.fm[67.221.255.50]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [video/x-flv]
[ <=> ] 1,234,151 8.96K/s
I got a file that can be played in any multimedia player.