I have an application based on WebRTC. I need to capture the system audio (via WASAPI), but the captured mix also contains my own application's audio stream; if I send this stream to the peer, they hear an echo.
The article Audio and Video / Core Audio APIs / Stream Management / Loopback Recording says:
WASAPI provides loopback mode primarily to support acoustic echo cancellation (AEC).
How should I understand this? How can I remove the audio produced by my own application?
Incidentally, I notice that Chrome doesn't have this issue: when I call getDisplayMedia(), the captured audio stream doesn't contain audio produced by Chrome itself.
The quoted statement means that in order to remove echo, it is useful (and necessary) to have access to the audio signal played through the speakers, because that signal comes back through the microphone or other recording hardware and creates an echo problem unless it is effectively subtracted.
Windows provides an API to get this mixed audio signal on its way to the audio output devices: "Loopback Recording".
Also, Windows provides another software component and API, the Voice Capture DSP:
The voice capture DMO includes the following DSP components:
Acoustic echo cancellation (AEC)
...
Currently the voice capture DMO supports only single-channel acoustic echo cancellation (AEC), so the output from the speaker line must be single-channel. If microphone array processing is disabled, multi-channel input is folded down to one channel for AEC processing. If both microphone array processing and AEC processing are enabled, AEC is performed on each microphone element before microphone array processing.
Together, these can be used to capture audio and address the echo cancellation challenge.
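The subtraction idea can be illustrated with a toy sketch. This is not a real AEC (a real canceller must adaptively estimate a time-varying echo path, e.g. with an NLMS filter); here the echo is assumed to be a known, fixed delay and attenuation of the loopback signal:

```javascript
// Toy illustration of the AEC principle: subtract the (known) echo of the
// loopback/far-end signal from the microphone mix.
// Assumption: echo path is a fixed delay (in samples) and a fixed gain.
function cancelEcho(mic, loopback, delay, gain) {
  return mic.map(function (sample, i) {
    var echo = i >= delay ? loopback[i - delay] * gain : 0;
    return sample - echo;
  });
}

// Example: near-end speech plus a delayed, attenuated copy of the loopback.
var loopback = [1, -1, 1, -1, 1, -1];
var near = [0.2, 0.2, 0.2, 0.2, 0.2, 0.2];
var delay = 2, gain = 0.5;
var mic = near.map(function (s, i) {
  return s + (i >= delay ? loopback[i - delay] * gain : 0);
});
// cancelEcho(mic, loopback, delay, gain) recovers the near-end signal.
```

Loopback recording gives you exactly the reference signal (`loopback` above) that such a canceller needs; that is what the quoted sentence means by "primarily to support acoustic echo cancellation".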
The AECMicArray sample gives you some code and further information:
The sample supports acoustic echo cancellation (AEC) and microphone array processing by using the AEC DMO, also called the voice capture DSP, provided by Microsoft.
Here is my question:
I use the Android API MediaMuxer to combine an H.264 stream and an AAC stream into an MP4 file. When I want to stop recording, I call mMediaMuxer.stop(), and the MP4 file plays well.
But sometimes something unexpected happens, like a sudden power loss, so there is no chance to call mMediaMuxer.stop(), and then the file cannot be played at all.
Does anybody know how to fix this problem? I want to be able to play the video even if mMediaMuxer.stop() was never called. Or is there another API or SDK that can combine H.264 and AAC streams reliably?
Is it possible to receive both video and audio from another peer if the peer who called createOffer() only allowed audio when requested via getUserMedia()?
Explanation by scenario:
Alice connects to a signalling server, and when getUserMedia() is called, chooses to share both video and audio.
Bob connects to the signalling server, and when getUserMedia() is called, only shares audio.
As Bob is the last to the party, Bob creates the peer connection offer via RTCPeerConnection.createOffer(). He shares his localDescription, which contains SDP data that does not mention video.
The resultant connection is audio-only as the SDP data only contained audio-related information.
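You can see this directly in the SDP: each negotiated media section starts with an m= line. A small helper (the function name is our own, not a WebRTC API) to list the kinds an offer negotiates:

```javascript
// List the media kinds (audio, video, ...) negotiated in an SDP blob by
// scanning its m= lines. Assumption: standard SDP line structure
// "m=<kind> <port> <proto> <fmt...>".
function sdpMediaKinds(sdp) {
  return sdp.split(/\r?\n/)
    .filter(function (line) { return line.indexOf('m=') === 0; })
    .map(function (line) { return line.slice(2).split(' ')[0]; });
}
```

Bob's audio-only offer would yield ['audio']; an offer that also asks to receive video would yield ['audio', 'video'].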
Can an offer be created that asks to receive video data without sharing it?
So the key was in the creation of the offer.
Referring to the Specification
The WebRTC 1.0 specification says:
4.2.5 Offer/Answer Options
These dictionaries describe the options that can be used to control the offer/answer creation process.
dictionary RTCOfferOptions {
    long    offerToReceiveVideo;
    long    offerToReceiveAudio;
    boolean voiceActivityDetection = true;
    boolean iceRestart = false;
};
In the case of video:
offerToReceiveVideo of type long
In some cases, an RTCPeerConnection may wish to receive video but not send any video. The RTCPeerConnection needs to know if it should signal to the remote side whether it wishes to receive video or not. This option allows an application to indicate its preferences for the number of video streams to receive when creating an offer.
Solution
RTCPeerConnection.createOffer() can take MediaConstraints as an optional third parameter.
The example I found was from the WebRTC for Beginners article:
Creating Offer SDP
peerConnection.createOffer(function (sessionDescription) {
    peerConnection.setLocalDescription(sessionDescription);
    // POST-Offer-SDP-For-Other-Peer(sessionDescription.sdp, sessionDescription.type);
}, function (error) {
    alert(error);
}, { 'mandatory': { 'OfferToReceiveAudio': true, 'OfferToReceiveVideo': true } });
These MediaConstraints can also be used with createAnswer().
Bob's offer will contain only audio, but Alice will also share her video.
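For reference, in the modern promise-based API the same preference is expressed through an RTCOfferOptions dictionary (field names per the spec quoted above) rather than legacy mandatory constraints. A sketch (the createOffer/setLocalDescription calls are browser-only):

```javascript
// Spec-style RTCOfferOptions: ask to receive video even though this peer
// only sends audio.
var offerOptions = { offerToReceiveAudio: true, offerToReceiveVideo: true };

// In a browser (sketch, not runnable outside one):
// peerConnection.createOffer(offerOptions)
//   .then(function (offer) { return peerConnection.setLocalDescription(offer); })
//   .then(function () { /* signal peerConnection.localDescription to the peer */ });
```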
When Bob later wishes to add (video) streams, he calls RTCPeerConnection.addStream() and a (re-)negotiation is needed (see the negotiationneeded event). This allows Bob to add different or additional streams at any time he wishes; you just have to make sure that the offer/answer is exchanged correctly (e.g. on the negotiationneeded event).
I wrote a (Dart-based) WebRTC library that might help you see how it works. See Sender and Receiver.
I'm trying to get the raw audio in getUserMedia() success callback and post it to the server.
The success callback receives the LocalMediaStream object.
var onSuccess = function (s) {
    var m = s.getAudioTracks(); // getAudioTracks() takes no arguments
    // m[0] contains the MediaStreamTrack object for audio
    // get the raw audio and do the stuff
};
But MediaStreamTrack has no attribute or method to get the raw audio from its channels.
How can we access the raw audio in this callback, which is called on success of getUserMedia()?
I found the Recorder.js library-- https://github.com/mattdiamond/Recorderjs
But it records blank audio on Chrome Version 26.0.1410.64 m.
It works fine on Chrome Version 29.0.1507.2 canary SyzyASan.
I think there is an issue with the Web Audio API used by Recorder.js.
I'm looking for a solution without the Web Audio API that works at least on Chrome's official build.
Check out the MediaStreamAudioSourceNode. You can create one of those (via the AudioContext's createMediaStreamSource method) and then connect the output to RecorderJS or a plain old ScriptProcessorNode (which is what RecorderJS is built on).
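With a ScriptProcessorNode you receive Float32 samples in [-1, 1]; before posting them to a server you typically convert them to 16-bit PCM, which is essentially what RecorderJS's WAV export does. A minimal sketch (the node wiring is browser-only and shown as comments):

```javascript
// Convert Float32 samples in [-1, 1] to signed 16-bit PCM.
function floatTo16BitPCM(samples) {
  var out = new Int16Array(samples.length);
  for (var i = 0; i < samples.length; i++) {
    var s = Math.max(-1, Math.min(1, samples[i])); // clamp out-of-range values
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return out;
}

// Browser-only wiring (sketch):
// var ctx = new AudioContext();
// var source = ctx.createMediaStreamSource(stream); // stream from getUserMedia
// var proc = ctx.createScriptProcessor(4096, 1, 1);
// proc.onaudioprocess = function (e) {
//   var pcm = floatTo16BitPCM(e.inputBuffer.getChannelData(0));
//   // POST pcm.buffer to the server
// };
// source.connect(proc); proc.connect(ctx.destination);
```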
Edit: Just realized you're asking if you can access the raw audio samples without the Web Audio API. As far as I know, I don't think that's possible.
This question is the follow up question to this thread: AR Drone 2 and ffserver + ffmpeg streaming
We are trying to get a stream from our AR Drone through a Debian server and into a flash application.
The big picture looks like this:
AR Drone --> Gstreamer --> CRTMPServer --> Flash Application
We are using the PaVEParse plugin for GStreamer found in this thread: https://projects.ardrone.org/boards/1/topics/show/4282
As seen in the thread, the AR Drone uses PaVE (Parrot Video Encapsulation) headers, which are unrecognized by most players, such as VLC. The PaVEParse plugin removes these headers.
We have used different pipelines and they all yield the same error.
Sample pipeline:
GST_DEBUG=3 gst-launch-0.10 tcpclientsrc host=192.168.1.1 port=5555 ! paveparse ! queue ! ffdec_h264 ! queue ! x264enc ! queue ! flvmux ! queue ! rtmpsink location='rtmp://0.0.0.0/live/drone' --gst-plugin-path=.
The PaVEParse plugin needs to be located on the --gst-plugin-path for it to work.
A sample error output from Gstreamer located in the ffdec_h264 element can be found at: http://pastebin.com/atK55QTn
The same thing happens if the decoding takes place in the player/dumper, e.g. VLC, FFplay, or rtmpdump.
The problem comes down to missing headers: the PPS reference is non-existent. We know that the PaVEParse plugin removes the PaVE headers, but we suspect that once these are removed there are no H.264 headers (SPS/PPS) left for the decoder/player to identify the frames by.
Is it possible to "restore" these H.264 headers, either from scratch or by transforming the PaVE headers?
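Our missing-header diagnosis can be checked directly on the raw byte stream by looking for SPS (NAL type 7) and PPS (NAL type 8) units. A toy Annex-B scanner (assuming 4-byte 00 00 00 01 start codes; a real tool would also handle 3-byte start codes):

```javascript
// Scan an Annex-B H.264 byte stream for NAL unit types: the type is the
// low 5 bits of the byte following a 00 00 00 01 start code. SPS = 7, PPS = 8.
function nalTypes(bytes) {
  var types = [];
  for (var i = 0; i + 4 < bytes.length; i++) {
    if (bytes[i] === 0 && bytes[i + 1] === 0 && bytes[i + 2] === 0 && bytes[i + 3] === 1) {
      types.push(bytes[i + 4] & 0x1f);
    }
  }
  return types;
}

// A decodable stream should contain both parameter sets.
function hasParameterSets(bytes) {
  var t = nalTypes(bytes);
  return t.indexOf(7) !== -1 && t.indexOf(8) !== -1;
}
```

Running such a check on the paveparse output would confirm whether the parameter sets are really absent or merely not repeated in-band.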
Can you please share a sample of the traffic between GStreamer and crtmpserver?
You can always use the LiveFLV support built inside crtmpserver. Here are more details:
Re-Stream a MPEG2 TS PAL Stream with crtmpserver
I am trying to record the sound of my device connected to Line-In via ActionScript 3.
According to the Adobe docs (http://livedocs.adobe.com/flex/3/html/help.html?content=Working_with_Sound_02.html), the Microphone class lets your application connect to a microphone or other sound input device on the user's system.
But the Microphone class detects only the microphones on my sound card (the Microphone.names array), not the "other sound input devices". Maybe there is another way to capture sound from line-in devices?
Thank you!
Flash is built with security in mind; it won't let you access any hardware except through predefined classes like Microphone and Camera (and only after user permission!).
You may have better luck plugging the device into the microphone socket, or rerouting its signal programmatically if your sound card software allows it.