WebRTC - disable all audio processing

I'm currently trying to get as clean an audio channel as possible via WebRTC. Via the getUserMedia media constraints object, I've set the following options:
constraints: {
    audio: {
        mandatory: {
            echoCancellation: false,
            googEchoCancellation: false,
            googAutoGainControl: false,
            googAutoGainControl2: false,
            googNoiseSuppression: false,
            googHighpassFilter: false,
            googTypingNoiseDetection: false,
            // For some reason setting googAudioMirroring causes a
            // navigator.getUserMedia error: NavigatorUserMediaError
            //googAudioMirroring: false
        }
    },
    video: false
},
This greatly improves the audio quality, but there still seems to be some audio processing present, which mutilates the audio with high-frequency noise on some of the test samples.
There is a Chrome flag --use-file-for-fake-audio-capture, described at http://peter.sh/experiments/chromium-command-line-switches/#use-file-for-fake-audio-capture, which allows input from a file for testing. As mentioned in the flag's description, all audio processing has to be disabled or the audio will be distorted, so there seem to be additional options for this purpose.
I also tried the --disable-audio-track-processing --audio-buffer-size=16 --enable-exclusive-audio Chrome flags, but some audio processing still seems to be applied.
Is there any way to disable the still-present audio processing (preferably via a JS API)?

I'd wager that the variable-bitrate (default) behavior of the Opus codec is causing some compression or adjustment. You could manually mangle the SDP offer to use CBR (constant bitrate) instead of VBR (variable bitrate). When you get the SDP offer from the browser, change the line:
a=fmtp:111 minptime=10; useinbandfec=1
to:
a=fmtp:111 minptime=10; cbr=1
Note that I'm both adding cbr=1 and removing useinbandfec=1. I'm not positive that dropping useinbandfec is necessary, but it seems that in-band FEC (forward error correction) causes compression adjustment, which you'd want to avoid as well.
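A minimal sketch of that munge in JS, assuming Opus is negotiated as payload type 111 (check the a=rtpmap lines if unsure) and that this runs inside an async function with pc as your RTCPeerConnection:

const offer = await pc.createOffer();
// Rewrite the Opus fmtp line to request constant bitrate.
const mungedSdp = offer.sdp.replace(
    /^a=fmtp:111 .*$/m,
    'a=fmtp:111 minptime=10; cbr=1'
);
await pc.setLocalDescription({ type: offer.type, sdp: mungedSdp });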

This is the updated way to disable audio processing and get a clean signal:
navigator.mediaDevices.getUserMedia({
    audio: {
        autoGainControl: false,
        channelCount: 2,
        echoCancellation: false,
        latency: 0,
        noiseSuppression: false,
        sampleRate: 48000,
        sampleSize: 16,
        volume: 1.0
    }
});
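Browsers silently ignore constraints they don't implement (volume and latency, for example, are not supported everywhere), so it's worth checking what was actually applied. A quick sketch, assuming it runs inside an async function:

const stream = await navigator.mediaDevices.getUserMedia({
    audio: { echoCancellation: false, noiseSuppression: false, autoGainControl: false }
});
// Log the settings the browser actually applied to the track.
console.log(stream.getAudioTracks()[0].getSettings());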
If you are streaming audio via WebRTC, it defaults to radio- or phone-quality audio optimized for voice, so make sure your SDP has the stereo and maxaveragebitrate params:
a=fmtp:111 minptime=10;useinbandfec=1; stereo=1; maxaveragebitrate=510000
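The same SDP-munging approach as in the CBR answer above works here; a sketch, again assuming Opus is payload type 111 and pc is your RTCPeerConnection inside an async function:

const offer = await pc.createOffer();
// Rewrite the Opus fmtp line to request stereo, higher-bitrate audio.
const mungedSdp = offer.sdp.replace(
    /^a=fmtp:111 .*$/m,
    'a=fmtp:111 minptime=10;useinbandfec=1; stereo=1; maxaveragebitrate=510000'
);
await pc.setLocalDescription({ type: offer.type, sdp: mungedSdp });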

Related

Cannot replay MP3 in Firefox using MediaSource even though it works in Chrome

I have implemented a simple audio player in my web application and noticed that it is not working in Firefox (let me just ... 🎉).
What I get is an error:
ERROR DOMException: MediaSource.addSourceBuffer: Type not supported in MediaSource
This is followed by a warning:
Cannot play media. No decoders for requested formats: audio/mpeg
This is the implementation for the sourceopen event handler:
private onSourceOpen = (e) => {
    this.logger.debug('onSourceOpen');
    if (!this.sourceBuffer) {
        this.sourceBuffer = this.mediaSource.addSourceBuffer('audio/mpeg');
    }
    this.mediaSource.removeEventListener('sourceopen', this.onSourceOpen);
    this.fetchRange(this.trackPlayUrl, 0, this.segmentLength, (chunk) => this.appendSegment(chunk));
}
Where
// Create the media source object
this.mediaSource = new MediaSource();
this.mediaSource.addEventListener('sourceopen', this.onSourceOpen);
Why does it hate me?
Before you try to create a SourceBuffer, you should always call MediaSource.isTypeSupported to determine whether the type is likely to play. If it returns false, the user agent is telling you it definitely won't work.
On the latest Firefox:
>> MediaSource.isTypeSupported('audio/mpeg')
<- false
It hates you because Firefox's MediaSource implementation can't play content with that MIME type, whereas Chrome's can.
AAC in ISOBMFF has very broad support, though this would require transcoding and repackaging your audio - try:
MediaSource.isTypeSupported('audio/mp4; codecs="mp4a.40.2"')
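A minimal sketch of that feature detection, picking the first container/codec combination the user agent claims to support (the segment-fetching part is elided):

const mimeTypes = ['audio/mpeg', 'audio/mp4; codecs="mp4a.40.2"'];
const supported = mimeTypes.find((t) => MediaSource.isTypeSupported(t));
if (!supported) {
    throw new Error('No MediaSource support for any of the available audio formats');
}
const mediaSource = new MediaSource();
mediaSource.addEventListener('sourceopen', () => {
    const sourceBuffer = mediaSource.addSourceBuffer(supported);
    // ...fetch and append segments packaged in the chosen format...
});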

webrtc configuration to reduce sent traffic

I am developing audio/video calling using WebRTC. I ran into a case where one of the participants is on an unstable (mobile) network. In this case, the participant's audio and video start to freeze, there are delays, etc. I think that to solve this problem the application needs to be configured to send less traffic.
Please tell me what WebRTC configurations exist to reduce sent traffic.
Most of the heavy traffic will be video. One solution is to limit video quality or disable it completely. You can limit video quality using this code:
const displayMediaStream = await getDisplayMediaStream(); // assumed helper wrapping navigator.mediaDevices.getDisplayMedia()
let supports = navigator.mediaDevices.getSupportedConstraints();
if (!supports["width"] || !supports["height"] || !supports["frameRate"] || !supports["facingMode"]) {
    // We're missing needed properties, so handle that error.
} else {
    let constraints = {
        width: { min: 640, ideal: 1920, max: 1920 },
        height: { min: 400, ideal: 1080 },
        aspectRatio: 1.777777778,
        frameRate: { max: 30 }
    };
    displayMediaStream.getVideoTracks()[0].applyConstraints(constraints);
}
return displayMediaStream.getVideoTracks()[0];
You can play with the values.
Also, the problem could be in the browser's codec choice. For example, in a screen-sharing scenario Firefox uses a codec that produces a high-quality video stream, which is good for static pictures such as shared documents; the problem appears when users broadcast dynamic video, like YouTube videos, over screen sharing. In such a case, Firefox overloads the network by sending streams of ~7 Gb, whereas Google Chrome is more intelligent and can adapt the traffic by using better codecs. I would do tests with different browsers, and if the problem lies in Firefox, you can try to force it to use better codecs, like the ones Google Chrome uses; for that you have to modify the SDP when you detect a Firefox browser, as described here: How can I change the default Codec used in WebRTC?
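Beyond constraints and codec choice, you can also cap the encoder bitrate per sender with RTCRtpSender.setParameters, which directly limits sent traffic. A sketch, assuming this runs inside an async function and pc is your RTCPeerConnection:

// Cap every video sender at roughly 300 kbps.
for (const sender of pc.getSenders()) {
    if (sender.track && sender.track.kind === 'video') {
        const params = sender.getParameters();
        if (!params.encodings || params.encodings.length === 0) {
            params.encodings = [{}]; // some browsers return no encodings before negotiation
        }
        params.encodings[0].maxBitrate = 300 * 1000; // bits per second
        await sender.setParameters(params);
    }
}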

QtWebEngine Quicknano has no Sound in Embedded Linux

I have compiled QtWebEngine into my i.MX6 embedded device. When I tried to play a YouTube video with quicknanobrowser, the video plays but there is no sound. In fact, there is no sound when I test-play the audio files at hpr.dogphilosophy.net/test even though the website says the browser codec is supported.
I have enabled pulseaudio, gstreamer, ffmpeg, opus, vpx, libwebp and yet there is still no sound.
However, I can play video with gst-launch and there is sound.
Is something wrong with quicknanobrowser that prevents sound from being enabled? Or are there components that I need to add to the embedded system?
Edit: ALSA, PulseAudio and GStreamer are all working fine with sound.
You need to force QtWebEngine to use ALSA. In embedded systems, it is disabled by default.
In qt5.7/qtwebengine/src/3rdparty/chromium/media/media.gyp, there is a test to check if we are on an embedded system:
# Enable ALSA and Pulse for runtime selection.
['(OS=="linux" or OS=="freebsd" or OS=="solaris") and ((embedded!=1 and chromecast==0) or is_cast_desktop_build==1)', {
    # ALSA is always needed for Web MIDI even if the cras is enabled.
    'use_alsa%': 1,
    'conditions': [
        ['use_cras==1', {
            'use_pulseaudio%': 0,
        }, {
            'use_pulseaudio%': 1,
        }],
    ],
}, {
    'use_alsa%': 0,
    'use_pulseaudio%': 0,
}],
I changed the last use_alsa% to 1, and in qt5.7/qtwebengine/src/core/config/embedded_linux.pri I added a new flag:
use_alsa=1
With these settings I have audio on my embedded ARM Linux, and with the flag:
enable_webrtc=1
I am able to start a WebRTC session with video and audio.

Capture system sound from browser

I am trying to build a web app that captures both the local and remote audio from a WebRTC call, but I can't record the remote audio (using RecordRTC).
I was wondering if I could capture the system sound somehow.
Is there a way to capture the system sound (not just the mic) from the browser? Maybe an extension?
In Chrome, the chrome.desktopCapture extension API can be used to capture the screen, which includes system audio (but only on Windows and Chrome OS, with no plans for OS X or Linux). E.g.
chrome.desktopCapture.chooseDesktopMedia([
    'screen', 'window' // ('tab' is not supported; use chrome.tabCapture instead)
], function(streamId) {
    navigator.webkitGetUserMedia({
        audio: {
            mandatory: {
                chromeMediaSource: 'system',
                chromeMediaSourceId: streamId
            }
        },
        video: false // We only want audio for now.
    }, function(stream) {
        // Do what you want with this MediaStream.
    }, function(error) {
        // Handle the error.
    });
});
I'm not sure whether Firefox can capture system sound, but at the very least it is capable of capturing some output (tab/window/browser/OS?).
First you need to visit about:config and set media.getusermedia.audiocapture.enabled to true (this could be automated through a Firefox add-on). Then the stream can be captured as follows:
navigator.mozGetUserMedia({
    audio: {
        mediaSource: 'audioCapture'
    },
    video: false // Just being explicit; we only want audio for now.
}, function(stream) {
    // Do what you want with this MediaStream.
}, function(error) {
    // Handle the error.
});
This was implemented in Firefox 42; see https://bugzilla.mozilla.org/show_bug.cgi?id=1156472
This is possible with the new Screen Capture API, but browser support is still limited.
See the "Browser compatibility" section in the above-linked MDN page for details. Some browsers currently don't yet support audio capture, and some others currently only allow audio capture from a specific tab, rather than the operating system as a whole.
Example code:
videoElem.srcObject = await navigator.mediaDevices.getDisplayMedia({audio:true, video:true});
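If the goal is to record the captured sound rather than just play it, here is a minimal sketch that feeds the audio track into a MediaRecorder (the audio/webm mime type is an assumption; check MediaRecorder.isTypeSupported first):

const stream = await navigator.mediaDevices.getDisplayMedia({ audio: true, video: true });
// Record only the audio portion of the captured stream.
const recorder = new MediaRecorder(new MediaStream(stream.getAudioTracks()), { mimeType: 'audio/webm' });
const chunks = [];
recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.onstop = () => {
    const blob = new Blob(chunks, { type: 'audio/webm' });
    // Do what you want with the recording, e.g. URL.createObjectURL(blob).
};
recorder.start();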

Howto: Save screencast to video file on ChromeOS?

Two Chrome apps/extensions have caught my eye on the webstore:
Screencastify
Snagit
I am aware of chrome.desktopCapture and how I can use getUserMedia() to capture a live stream of a user's desktop.
Example:
navigator.webkitGetUserMedia({
    audio: false,
    video: {
        mandatory: {
            chromeMediaSource: 'desktop',
            chromeMediaSourceId: desktop_id,
            minWidth: 1280,
            maxWidth: 1280,
            minHeight: 720,
            maxHeight: 720
        }
    }
}, successCallback, errorCallback);
I'd love to create my own screencast app that allows audio recording as well as embedding webcam capture in a given corner of the video like Screencastify.
I understand capturing the desktop and the audio and video of the user, but how do you put it all together and make it into a video file?
I'm assuming there is a way to create a video file from a getUserMedia() stream on ChromeOS. Something that only ChromeOS has implemented?
How is it done? Thanks in advance for your answers.
The actual encoding and saving of the video file isn't something that has been implemented in Chrome yet. Mozilla has it in a nascent form at the moment. I'm unsure of its state on ChromeOS. I can give you a little information I've gleaned during development with the Chrome browser, however.
The two ways to encode, save, and distribute a media stream as a video are client-side and server-side.
Server-side:
Requires a media server of some kind. The best free/open-source solution that I've found so far is Kurento. The media stream is uploaded (in chunks or whole) or streamed to the media server, where it is encoded and saved for later use. This also works with peer-to-peer calls, with the server acting as a middleman and recording as the data streams through.
Client-side:
This is all about browser-based encoding. There are currently two working options that I've tested successfully in Chrome.
Whammy.js:
This method uses a canvas hack to save arrays of WebP images and then encode them into a WebM container. While slow, it works well with video. No audio support; I'm working on that at the moment.
videoconverter.js (was ffmpeg-asm.js):
This is a straight port of ffmpeg to JavaScript using Emscripten. It works with both audio and video. It's also gigantic, script-wise, at around 25 MB uncompressed. The other reason I'm not using it in production is the shaky licensing ground that ffmpeg is on at the moment.
It has not been optimized as much as it could be; making it reliably production-ready would probably be quite a project.
Hopefully that at least gives you avenues of research.
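For completeness, here is a minimal sketch of the browser-recording approach the answer alludes to (the MediaRecorder API, which Mozilla shipped first and which later landed in Chrome as well), assuming stream is the getUserMedia() desktop stream from the question:

const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
const chunks = [];
recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.onstop = () => {
    // Assemble the recorded chunks into a single WebM file and offer it as a download.
    const blob = new Blob(chunks, { type: 'video/webm' });
    const a = document.createElement('a');
    a.href = URL.createObjectURL(blob);
    a.download = 'screencast.webm';
    a.click();
};
recorder.start(1000); // emit a data chunk every second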