How to disable system audio enhancements using WebRTC?

Different systems (Windows/Android/etc.) have some "built-in" audio enhancements, for example AEC (acoustic echo cancellation), NR (noise reduction) and automatic gain control. Any of these can be turned on or off, in any combination.
There are also audio enhancements in some browsers (I know about Chrome and Firefox).
Is it possible to turn them all off using WebRTC?
As far as I know, it is possible to turn off the "browser enhancements", and I think I managed it by specifying mediaConstraints. Example for Chrome:
var mediaConstraints = {
  audio: {
    echoCancellation: { exact: false },
    googEchoCancellation: { exact: false },
    googAutoGainControl: { exact: false },
    googNoiseSuppression: { exact: false },
  }
};
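For completeness, this is roughly how I apply them and check the result (a minimal sketch; getSettings() just reports what the browser actually ended up using):
// Request the microphone with the constraints above and inspect what was applied.
navigator.mediaDevices.getUserMedia(mediaConstraints)
  .then(function(stream) {
    var track = stream.getAudioTracks()[0];
    console.log(track.getSettings()); // e.g. { echoCancellation: false, ... }
  })
  .catch(function(err) {
    console.error('getUserMedia failed:', err);
  });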
I can't find a solution for turning off system/device-specific audio enhancements.
There is a similar question, WebRTC - disable all audio processing, but I think it addresses only the browser enhancements.

Related

All MIME types supported by MediaRecorder in Firefox and Chrome?

Where can I find a list of all MIME types that are supported by Firefox or Chrome? All examples I've seen so far use video/webm only.
I haven't seen any comprehensive list for Firefox yet, but I did manage to find something (via a post on the MediaRecorder API from Google's web updates section) that links to this test set, which seems to shed some light on things.
Essentially, it looks like the following are (at the time of writing) accepted MIME types for video/audio in Chrome:
video/webm
video/webm;codecs=vp8
video/webm;codecs=vp9
video/webm;codecs=vp8.0
video/webm;codecs=vp9.0
video/webm;codecs=h264
video/webm;codecs=H264
video/webm;codecs=avc1
video/webm;codecs=vp8,opus
video/WEBM;codecs=VP8,OPUS
video/webm;codecs=vp9,opus
video/webm;codecs=vp8,vp9,opus
video/webm;codecs=h264,opus
video/webm;codecs=h264,vp9,opus
video/x-matroska;codecs=avc1
audio/webm
audio/webm;codecs=opus
(EDITED 2019-02-10: Updated to include brianchirls' link find)
I made this small function in my utils.js to get the best supported codec, with support for multiple possible naming variations (example: Firefox supports video/webm;codecs:vp9 but not video/webm;codecs=vp9).
You can reorder the videoTypes, audioTypes and codecs arrays by priority, depending on your needs, so you'll always fall back to the next supported type.
EDIT: Added support for audio, fixed MIME type duplicates
function getSupportedMimeTypes(media, types, codecs) {
  const isSupported = MediaRecorder.isTypeSupported;
  const supported = [];
  types.forEach((type) => {
    const mimeType = `${media}/${type}`;
    codecs.forEach((codec) => [
      `${mimeType};codecs=${codec}`,
      `${mimeType};codecs=${codec.toUpperCase()}`,
      // /!\ false positive /!\
      // `${mimeType};codecs:${codec}`,
      // `${mimeType};codecs:${codec.toUpperCase()}`
    ].forEach(variation => {
      if (isSupported(variation))
        supported.push(variation);
    }));
    if (isSupported(mimeType))
      supported.push(mimeType);
  });
  return supported;
}
// Usage ------------------
const videoTypes = ["webm", "ogg", "mp4", "x-matroska"];
const audioTypes = ["webm", "ogg", "mp3", "x-matroska"];
const codecs = ["should-not-be-supported","vp9", "vp9.0", "vp8", "vp8.0", "avc1", "av1", "h265", "h.265", "h264", "h.264", "opus", "pcm", "aac", "mpeg", "mp4a"];
const supportedVideos = getSupportedMimeTypes("video", videoTypes, codecs);
const supportedAudios = getSupportedMimeTypes("audio", audioTypes, codecs);
console.log('-- Top supported Video : ', supportedVideos[0])
console.log('-- Top supported Audio : ', supportedAudios[0])
console.log('-- All supported Videos : ', supportedVideos)
console.log('-- All supported Audios : ', supportedAudios)
For Firefox, the accepted MIME types can be found in MediaRecorder.cpp and confirmed using MediaRecorder.isTypeSupported(...)
Example:
MediaRecorder.isTypeSupported('video/webm;codecs=vp8')   // true
MediaRecorder.isTypeSupported('video/webm;codecs=vp8.0') // true
MediaRecorder.isTypeSupported('video/webm;codecs=vp9')   // false
MediaRecorder.isTypeSupported('audio/ogg;codecs=opus')   // true
MediaRecorder.isTypeSupported('audio/webm')              // false
MediaRecorder support for common audio codecs:
MediaRecorder.isTypeSupported('audio/webm;codecs=opus'); // true on chrome, true on firefox => SO OPUS IT IS!
MediaRecorder.isTypeSupported('audio/ogg;codecs=opus'); // false on chrome, true on firefox
MediaRecorder.isTypeSupported('audio/webm;codecs=vorbis'); // false on chrome, false on firefox
MediaRecorder.isTypeSupported('audio/ogg;codecs=vorbis'); // false on chrome, false on firefox
Firefox used Vorbis for audio recording in its first implementations, but it has since moved to Opus.
So Opus it is!
This may prove of interest:
MediaRecorder is currently experimental in Safari (as of August 2020).
caniuse Opus
caniuse MediaRecorder
Based on #MillenniumFennec's answer (+ audio + removing duplicates + some other improvements):
function getAllSupportedMimeTypes(...mediaTypes) {
  if (!mediaTypes.length) mediaTypes.push(...['video', 'audio'])
  const FILE_EXTENSIONS = ['webm', 'ogg', 'mp4', 'x-matroska']
  const CODECS = ['vp9', 'vp9.0', 'vp8', 'vp8.0', 'avc1', 'av1', 'h265', 'h.265', 'h264', 'h.264', 'opus']

  return [...new Set(
    FILE_EXTENSIONS.flatMap(ext =>
      CODECS.flatMap(codec =>
        mediaTypes.flatMap(mediaType => [
          `${mediaType}/${ext};codecs:${codec}`,
          `${mediaType}/${ext};codecs=${codec}`,
          `${mediaType}/${ext};codecs:${codec.toUpperCase()}`,
          `${mediaType}/${ext};codecs=${codec.toUpperCase()}`,
          `${mediaType}/${ext}`,
        ]),
      ),
    ),
  )].filter(variation => MediaRecorder.isTypeSupported(variation))
}
console.log(getAllSupportedMimeTypes('video', 'audio'))
Sorry, I can't add comments, but I thought it important to note:
Implementing recording of raw samples via ScriptProcessor or AudioWorklet is flawed for a number of reasons, one here, mainly because it connects you to an output node, and clock 'correction' happens before you see the data.
So the lack of audio/wav or another raw format really hurts.
But just maybe... it seems 'audio/webm;codecs=pcm' is supported in Chrome.
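A quick way to check that claim (a minimal sketch; it assumes a live microphone is available and only logs chunk sizes):
// Check for PCM-in-WebM support and record a short clip if available.
const pcmType = 'audio/webm;codecs=pcm';
if (MediaRecorder.isTypeSupported(pcmType)) {
  navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
    const recorder = new MediaRecorder(stream, { mimeType: pcmType });
    recorder.ondataavailable = e => console.log('PCM chunk:', e.data.size, 'bytes');
    recorder.start();
    setTimeout(() => recorder.stop(), 1000); // record one second
  });
} else {
  console.log(pcmType, 'is not supported here');
}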
ISTYPESUPPORTED
Building on the previous answers (thanks #Fennec), I have created a jsfiddle to list all the supported types: https://jsfiddle.net/luiru72/rfhLcu26/5/. I also added a non-existent codec ("notatall").
Among the results of this script, if you run it in Firefox, you will find:
video/webm;codecs:vp9.0
video/webm;codecs=vp8
video/webm;codecs:vp8
video/webm;codecs:notatall
Note that you will not find "video/webm;codecs=vp9.0", and you will not find "video/webm;codecs=notatall" either.
This is because isTypeSupported on Firefox understands the requests "video/webm;codecs=vp9.0" and "video/webm;codecs=notatall" and correctly reports them as unsupported, but it does not understand the requests "video/webm;codecs:vp9.0" and "video/webm;codecs:notatall", so isTypeSupported on Firefox (as of version 92.0, 2021-09-14) reports them as supported.
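In other words, a minimal sketch of the check on Firefox 92:
// Firefox parses the '=' form and correctly rejects the bogus codec,
// but does not parse the ':' form, so it falsely reports it as supported.
MediaRecorder.isTypeSupported('video/webm;codecs=notatall'); // false (correct)
MediaRecorder.isTypeSupported('video/webm;codecs:notatall'); // true  (false positive)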
MEDIARECORDER
I have created another jsfiddle to experiment with MediaRecorder: https://jsfiddle.net/luiru72/b9q4nsdv/42/
If you try to create a MediaRecorder on Firefox using the wrong syntax "video/webm;codecs:vp9,opus" or "video/webm;codecs:notatall,opus", you do not get an error; you just get a video encoded in VP8 and Opus. If you open the file using a tool like MediaInfo (https://sourceforge.net/projects/mediainfo/), you realize that it is encoded in VP8,Opus.
If you specify "video/webm;codecs=vp8", you get an error because VP8 cannot encode audio. You need to specify both, "video/webm;codecs=vp8,opus", or you can just rely on the defaults by specifying only the container format "video/webm". Either way you get a file encoded in VP8,Opus, but the default video and audio encoders could change over time, so if you want to be sure that VP8 and Opus are used, you need to specify them.
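For illustration, a minimal sketch of the working syntax (assuming stream is a MediaStream that contains both a video and an audio track):
// Explicitly request VP8 video plus Opus audio in a WebM container.
const recorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=vp8,opus' });
recorder.ondataavailable = (e) => console.log('chunk of', e.data.size, 'bytes');
recorder.start(1000); // deliver a chunk roughly every second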
Key take away points:
you should use the syntax video/webm;codecs=vp8, not video/webm;codecs:vp8
when creating a MediaRecorder, you should take extra care: for example, on Firefox, video/webm;codecs=vp8 is supported, but when creating a MediaRecorder you should use "video/webm" or "video/webm;codecs=vp8,opus"
if you specify an incorrect syntax, for example video/webm;codecs:vp9,opus in Firefox, you do not get an error; you just get a file that is encoded in VP8,Opus. You only realize that it is in a different format from the intended one if you open it with a program like MediaInfo that can show you the codecs that were used
I found a solution today which involves using
var canRecordVp9 = MediaRecorder.isTypeSupported('video/webm;codecs=vp9');
to differentiate between Chrome (and Opera) and Firefox, and then do
if (canRecordVp9) {
  mediaRecorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=vp9' });
} else {
  mediaRecorder = new MediaRecorder(stream);
}
to construct the MediaRecorder accordingly.
Then, when grabbing the blob:
if (canRecordVp9) {
  blob = new Blob([myArrayBuffer], { "type": "video/webm;codecs=vp9" });
} else {
  blob = new Blob([myArrayBuffer], { "type": "video/webm" });
}
and finally, use the FileReader to get the blob as a dataUrl:
var reader = new FileReader();
reader.onload = function(event) {
  var blobDataUrl = event.target.result;
};
reader.readAsDataURL(blob);
I then save the blobDataUrl as a webm file, and videos recorded in Chrome work fine in Firefox, and vice-versa.

RTCPeerConnection media stream working in Firefox but not Chrome

Trying to keep the problem as simple as possible, I am creating a media stream in a Chrome extension like so:
var pc = new RTCPeerConnection(null);

chrome.desktopCapture.chooseDesktopMedia(['screen', 'window'], null, function(streamId) {
  var constraints = {
        audio: false,
        video: {
          mandatory: {
            chromeMediaSource: 'desktop',
            chromeMediaSourceId: streamId
          }
        }
      },
      success = function(stream) {
        pc.addStream(stream);
        pc.createOffer(function(offer) {
          pc.setLocalDescription(offer, function() {
            send('make_offer', name, offer);
          }, printError);
        }, printError);
      };
  getUserMedia(constraints, success, printError);
});
For now, my offer is received by a peer visiting a page in a browser. That looks more or less like this (m is a message object with the offer):
var pc = new RTCPeerConnection(null);

pc.onaddstream = function(e) {
  var video = document.getElementById('video');
  video.src = URL.createObjectURL(e.stream);
};

pc.setRemoteDescription(new RTCSessionDescription(m.value), function() {
  pc.createAnswer(function(answer) {
    pc.setLocalDescription(answer, function() {
      send('make_answer', m.from, answer);
    }, printError);
  }, printError);
}, printError);
I have done this both with and without ICE servers, which look like this when I use them:
var iceServers = {
  iceServers: [
    { url: 'stun:stun.l.google.com:19302' }
  ]
};
Right now, the peer receives and displays the stream perfectly in Firefox. No problem at all. But it's not working in Chrome. Here is some selected data from chrome://webrtc-internals:
connection to firefox:
"ssrc_3309930214_send-transportId": {
"startTime": "2014-09-30T01:41:11.525Z",
"endTime": "2014-09-30T01:41:21.606Z",
"values": "[\"Channel-video-1\",\"Channel-video-1\",\"Channel-video-1\"]"
},
"ssrc_3309930214_send-packetsLost": {
"startTime": "2014-09-30T01:41:11.525Z",
"endTime": "2014-09-30T01:41:21.606Z",
"values": "[0,0,0,0,0,0,0]"
},
connection to chrome:
"ssrc_1684026093_send-transportId": {
"startTime": "2014-09-30T01:41:57.310Z",
"endTime": "2014-09-30T01:42:00.313Z",
"values": "[\"Channel-audio-1\",\"Channel-audio-1\",\"Channel-audio-1\",\"Channel-audio-1\"]"
},
"ssrc_1684026093_send-packetsLost": {
"startTime": "2014-09-30T01:41:57.310Z",
"endTime": "2014-09-30T01:42:00.313Z",
"values": "[-1,-1,-1,-1]" // what is causing this??
},
Those seem important, but I'm not sure of the implications. I have more data, but I'm not sure what is relevant. The main idea is that data goes out to Firefox, but not to Chrome, though no exceptions occur that I can see. One further suspicious piece of data appears if I load the peer page in Chrome Canary (latest):
Failed to load resource: net::ERR_CACHE_MISS
This is a console error, and I don't know where it comes from. It occurs after the answer is sent from the peer back to the host (Chrome extension).
Signaling is done over wss://; the test peer is hosted at https://.
I'm not sure where to go from here.
Update: based on the answer and a comment, I added a handler for onicecandidate:
pc.onicecandidate = function(e) {
  console.log('This is the ice candidate.');
  console.log(e);
  if (!e.candidate) return console.warn('no candidate!');
  send('got_ice_candidate', name, e.candidate);
};
I also set up an equivalent peer connection from browser to browser using video:
var constraints = {
  audio: false,
  video: true
};
getUserMedia(constraints, success, printError);
This works fine, both from Firefox to Chrome and vice versa, so the issue may be Chrome-extension-specific...
There is a difference in how ICE gathering occurs between the successful case and the extension case:
Between browsers, there is no ICE at all. There is one event, and e.candidate is null.
From extension to browser, there are lots of onicecandidate events. They are not all in agreement. So perhaps the Chrome extension is confusing the STUN server? I don't know.
Thanks for your answers, would love any more insight that you have.
Can you please add handling of ICE candidates on both sides?
pc.onicecandidate = function(e){ if (e.candidate) send('ice_candidate', e.candidate); };
And on the other side, on receiving this message, do
pc.addIceCandidate(new RTCIceCandidate(message));
Chrome sends ICE candidates even after the offer/answer have been exchanged, which Firefox does not seem to do.
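For reference, a minimal sketch of the wiring on both sides (send and onSignal are placeholders for whatever signaling channel you already use):
// Sender side: forward every gathered candidate over the signaling channel.
pc.onicecandidate = function(e) {
  if (e.candidate) send('ice_candidate', e.candidate);
};

// Receiver side: apply candidates as they arrive from signaling.
onSignal('ice_candidate', function(message) {
  pc.addIceCandidate(new RTCIceCandidate(message));
});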

Access 'Chrome dev-tool mobile emulator' from custom extension

I am trying to access the 'Chrome dev-tool mobile emulator' from a custom extension.
I am aware that I can't open DevTools from a custom extension.
Is there any way to trigger the mobile emulator from a custom extension? If yes, guidance/tutorials would be a great help.
What I need: I select a mobile device from my extension and the browser changes the viewport, user agent and sensors to emulate the selected device. In short, I need a replica of the DevTools mobile emulator.
Any help/link/code/extension link would be a great favour.
You'll have to use setDeviceMetricsOverride via the DevTools protocol. You access it through the chrome.debugger extension API. You'll use that method and probably others to set the UA and such.
Example code from my Chrome extension.
Example (from #onsy):
chrome.debugger.sendCommand(debuggeeId, "Network.enable", {}, onResponse);
chrome.debugger.sendCommand(debuggeeId, "Network.setUserAgentOverride", {
  userAgent: deviceData.userAgent
}, onResponse);
chrome.debugger.sendCommand(debuggeeId, "Page.enable", {}, onResponse);
chrome.debugger.sendCommand(debuggeeId, "Page.setDeviceMetricsOverride", {
  width: deviceData.width / deviceData.deviceScaleFactor,
  height: deviceData.height / deviceData.deviceScaleFactor,
  deviceScaleFactor: deviceData.deviceScaleFactor,
  emulateViewport: true,
  fitWindow: true,
  textAutosizing: true,
  fontScaleFactor: 1
}, onResponse);
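The snippet above assumes the debugger is already attached to the tab; a minimal sketch of that step (tabId is assumed to come from elsewhere, e.g. chrome.tabs.query):
// Attach the debugger to the tab before issuing any protocol commands.
var debuggeeId = { tabId: tabId };
chrome.debugger.attach(debuggeeId, "1.0", function() {
  if (chrome.runtime.lastError) {
    console.log(chrome.runtime.lastError.message);
    return;
  }
  // The sendCommand calls above can now be issued against debuggeeId.
});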

In Google Chrome, what is the extension API for changing the User Agent and Device Metrics?

In Google Chrome, when viewing the developer tools, in the bottom right there is a gear icon that opens an additional Settings popup. One of the pages in the Settings popup is Overrides, which contains User Agent and Device Metrics settings. I am trying to find the extensions API that is able to set those values programmatically. Does such an API exist?
I've looked at the main APIs and the experimental APIs, but can't seem to find anything.
The sample for devtools.panels in the code samples doesn't seem to indicate how to 'explore' the existing dev panels either.
Specifically, I'm trying to build a simple extension available from a context menu in a browser action. It would act like a user-agent switcher, offering choices from the same list as in the Settings popup, and automatically set the Device Metrics to the values of the selected agent, e.g. 640x960 for iPhone 4.
Any leads on how to programmatically access the Settings popup?
Some of the advanced features offered by the Developer tools can be accessed through the chrome.debugger API (add the debugger permission to the manifest file).
The User agent can be changed using the Network.setUserAgentOverride command:
// Assume: tabId is the ID of the tab whose UA you want to change
// It can be obtained via several APIs, including but not limited to
// chrome.tabs, chrome.pageAction, chrome.browserAction, ...

// 1. Attach the debugger
var protocolVersion = '1.0';
chrome.debugger.attach({
  tabId: tabId
}, protocolVersion, function() {
  if (chrome.runtime.lastError) {
    console.log(chrome.runtime.lastError.message);
    return;
  }
  // 2. Debugger attached, now prepare for modifying the UA
  chrome.debugger.sendCommand({
    tabId: tabId
  }, "Network.enable", {}, function(response) {
    // Possible response: response.id / response.error
    // 3. Change the User Agent string!
    chrome.debugger.sendCommand({
      tabId: tabId
    }, "Network.setUserAgentOverride", {
      userAgent: 'Whatever you want'
    }, function(response) {
      // Possible response: response.id / response.error
      // 4. Now detach the debugger (this restores the UA string).
      chrome.debugger.detach({ tabId: tabId });
    });
  });
});
The official documentation for the supported protocols and commands can be found here. At the time of writing, there's no documentation for changing the device metrics. However, after digging in Chromium's source code, I discovered a file which defines all currently known commands:
chromium/src/out/Debug/obj/gen/webcore/InspectorBackendDispatcher.cpp
When I look through the list of definitions, I find Page.setDeviceMetricsOverride. This phrase seems to match our expectations, so let's search further, to find out how to use it:
Chromium code search: "Page.setDeviceMetricsOverride"
This yields "chromium/src/out/Release/obj/gen/devtools/DevTools.js" (thousands of lines). Somewhere, there's a line defining (beautified):
InspectorBackend.registerCommand("Page.setDeviceMetricsOverride", [
  { "name": "width",           "type": "number",  "optional": false },
  { "name": "height",          "type": "number",  "optional": false },
  { "name": "fontScaleFactor", "type": "number",  "optional": false },
  { "name": "fitWindow",       "type": "boolean", "optional": false }
], []);
How to read this? Well, use your imagination:
chrome.debugger.sendCommand({
  tabId: tabId
}, "Page.setDeviceMetricsOverride", {
  width: 1000,
  height: 1000,
  fontScaleFactor: 1,
  fitWindow: false
}, function(response) {
  // ...
});
I've tested this in Chrome 25 using protocol version 1.0, and it works: The tab being debugged is resized. Yay!

SWFObject in a Chrome Extension - API Unavailable

Hi!
I'm building a Chrome extension in which I need to embed a SWFObject in the background page.
Everything works except the JavaScript controls for the SWFObject and the event listeners.
My guess is that it has something to do with cross-domain policies, because while testing the page on a web server everything worked fine.
Anyway, here's a snippet:
In the main page:
var playerView = chrome.extension.getBackgroundPage();
$('#playerPause').click(function() {
  playerView.playerPause();
});
In the background:
function playerPause() {
  if (postData[nowPlaying].provider == 'youtube') {
    player.pauseVideo();
  }
  else if (postData[nowPlaying].provider == 'soundcloud') {
    player.api_pause();
  }
}
And the eventListeners:
soundcloud.addEventListener('onMediaEnd', playerNext);

function onYouTubePlayerReady(player) {
  player.addEventListener("onStateChange", "function(state){ if(state == 0) { playerNext(); } }");
}
In the console it throws
"Uncaught TypeError: Object # has no method 'pauseVideo'"
for both the YouTube embed and the SoundCloud one.
Also, the SWFobject is embedded like this (and works):
function loadTrack(id) {
  if (postData[id].provider == 'youtube') {
    swfobject.embedSWF(
      "http://www.youtube.com/e/" + postData[id].url + "?enablejsapi=1&playerapiid=player",
      "player",
      "1",
      "1",
      "8",
      null,
      { autoplay: 1 },
      { allowScriptAccess: "always" },
      { id: "player" }
    );
  }
  else if (postData[id].provider == 'soundcloud') {
    swfobject.embedSWF(
      'http://player.soundcloud.com/player.swf',
      'player',
      '1',
      '1',
      '9.0.0',
      'expressInstall.swf',
      {
        enable_api: true,
        object_id: 'player',
        url: postData[id].url,
        auto_play: true
      },
      { allowscriptaccess: 'always' },
      {
        id: 'player',
        name: 'player'
      }
    );
  }
}
Sorry for the lengthy post; I wanted to provide as much information as possible.
Also, I know the code isn't pretty; this was only my second application ;)
Thanks a lot in advance to anyone who can help,
Giacomo
You can have a look at this extension. You cannot access a local connection in a Chrome extension, but you can run a content script as a proxy script instead. (You can serve a proxy page on GAE or any other free server.)
The problem here is that you can't use inline scripts or inline event handlers in Chrome extensions, ever since the manifest evolved to v2.
You should have added the manifest file so I could better understand what's going on. But briefly, all you CAN do is:
In the main page,
Remove all inline scripts and move them to an external JS file.
Remove inline event listeners, move them to the same or another external JS file, and use addEventListener() (see the sketch below).
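For example, a minimal sketch of that migration (popup.js is a hypothetical external file; playerPause is the background-page function from the question):
// Before (inline, blocked by the manifest v2 content security policy):
//   <button onclick="playerPause()">Pause</button>
// After: give the element an id in the HTML...
//   <button id="playerPause">Pause</button>
// ...and wire the handler up in an external file loaded via <script src="popup.js">:
document.getElementById('playerPause').addEventListener('click', function() {
  var playerView = chrome.extension.getBackgroundPage();
  playerView.playerPause();
});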
But the issue is, you can't execute calls to the SWF in the background page or expect it to return anything. All of these will continue to give you an "Uncaught TypeError" exception.
Take the case of a webcam image-capturing SWF: the webcam stream will be shown on the page, but the function call to it can never be made, and hence the image will never be captured.
My project to scan QR codes from the add-on's popup fell apart because of this.