Avoid pops and clicks whilst recording HTML Audio - html5-audio

I am building an app that allows the user to record a message with the microphone, using a node package called mic-recorder-to-mp3. The constructor takes a bit-rate setting, currently 128 kbps (which should be more than sufficient for voice recording).
We've started collecting recordings with this app and some are fine, but others are really awful with loud clicks and pops.
I understand that the sample rate cannot be set and depends on the hardware you're using, but is there something else I am missing? Is that bit rate too high? Do I need to allocate more memory to the AudioBuffer? Any advice greatly appreciated.
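For reference, the recorder setup is roughly this (a minimal sketch following the package's README; 128 is the bit rate in kbps):

import MicRecorder from 'mic-recorder-to-mp3';

// The sample rate still comes from the hardware; only the MP3 bit rate is set here
const recorder = new MicRecorder({ bitRate: 128 });

recorder.start();

// Later, when the user finishes speaking:
recorder.stop().getMp3().then(([buffer, blob]) => {
  // buffer is the raw mp3 data, blob is ready to upload or play back
});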

I have discovered to my delight that you can now pass constraints when you call getUserMedia, which can improve recording quality. They are specified in the Media Capture and Streams spec as MediaTrackConstraints: https://developer.mozilla.org/en-US/docs/Web/API/MediaTrackConstraints
and usage:
export const audioTrackConstraints = {
  echoCancellation: {exact: false},
  autoGainControl: {exact: false},
  noiseSuppression: {exact: false},
  sampleRate: 16000,
}

navigator.mediaDevices.getUserMedia({ audio: audioTrackConstraints })
  .then(() => {
    console.log('Permission Granted');
  })
  .catch((error) => {
    console.log(error.message);
  })

Related

Audio distortion occurs when using AudioWorkletProcessor with a MediaStream source and connecting a bluetooth device while it is already running

In our project, we use AudioContext to wire up input from a microphone to an AudioWorkletProcessor and out to a MediaStream. Ultimately, this is sent to other peers in a WebRTC call.
If someone loads the page, the audio always sounds fine. But if they start with a hard-wired microphone (like a laptop mic or webcam) and then connect a Bluetooth device (such as AirPods or headphones), the audio becomes distorted and robotic-sounding.
If we tear out all the other code and simplify it, we still have the issue.
bypassProcessor.js
// Basic processor that wires input to output without transforming the data
// https://github.com/GoogleChromeLabs/web-audio-samples/blob/main/audio-worklet/basic/hello-audio-worklet/bypass-processor.js
class BypassProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const input = inputs[0];
    const output = outputs[0];
    for (let channel = 0; channel < output.length; ++channel) {
      output[channel].set(input[channel]);
    }
    return true;
  }
}

registerProcessor('bypass-processor', BypassProcessor);
main.js
const microphoneStream = await navigator.mediaDevices.getUserMedia({
  audio: true, // have also tried { channelCount: 1 } and { channelCount: { exact: 1 } }
  video: false
})

const audioCtx = new AudioContext()
const inputNode = audioCtx.createMediaStreamSource(microphoneStream)

await audioCtx.audioWorklet.addModule('worklet/bypassProcessor.js')
const processorNode = new AudioWorkletNode(audioCtx, 'bypass-processor')

inputNode.connect(processorNode).connect(audioCtx.destination)
Interestingly, I have found that if you comment out the two audio worklet lines and instead create a simple gain node, it works fine:
// await audioCtx.audioWorklet.addModule('worklet/bypassProcessor.js')
// const processorNode = new AudioWorkletNode(audioCtx, 'bypass-processor')
const gainNode = audioCtx.createGain()
Also, if you simply create the AudioWorkletNode but don't even connect it to the other nodes, this also reproduces the issue.
I've created a small React app here that reproduces the problem: https://github.com/JacobMuchow/audio_distortion_repro/tree/master
I've tried detecting when this happens using the 'ondevicechange' event, then closing the old AudioContext and nodes and recreating everything, but this only works some of the time. If I wait for some time and then recreate it again, it works, so I'm worried about some kind of garbage-collection issue with the processor, but that might be beside the point.
I suspect this has something to do with sample rates... when the AudioContext is correctly recreated it switches from 48 kHz to 16 kHz and then it sounds fine. But sometimes it is recreated still at 48 kHz and continues to sound robotic.
Threads on the internet concerning this are incredibly sparse and I'm hoping someone has specific experience with this issue or this API and can point out what I need to do differently.
For Chrome, the problem is very likely https://crbug.com/1090441 that was recently fixed. I think Firefox doesn't have this problem but I didn't check.
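If you're on a Chrome version without that fix, here is a sketch of the recreate-on-devicechange workaround the question describes. Pinning the context's sampleRate is my own guess at addressing the 48 kHz / 16 kHz mismatch mentioned above, not a confirmed cure:

let audioCtx, inputNode, processorNode;

async function buildGraph() {
  // Forcing a sampleRate is an untested guess aimed at the
  // 48 kHz vs 16 kHz mismatch described in the question
  audioCtx = new AudioContext({ sampleRate: 16000 });
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: false });
  inputNode = audioCtx.createMediaStreamSource(stream);
  await audioCtx.audioWorklet.addModule('worklet/bypassProcessor.js');
  processorNode = new AudioWorkletNode(audioCtx, 'bypass-processor');
  inputNode.connect(processorNode).connect(audioCtx.destination);
}

navigator.mediaDevices.addEventListener('devicechange', async () => {
  // Tear down the old graph and rebuild it against the new device
  if (audioCtx) await audioCtx.close();
  await buildGraph();
});

buildGraph();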

Properly using chrome.tabCapture in a manifest v3 extension

Edit:
As the end of the year and the end of Manifest V2 are approaching, I did a bit more research on this and found the following workarounds:
The example here that uses the desktopCapture API:
https://github.com/GoogleChrome/chrome-extensions-samples/issues/627
The problem with this approach is that it requires the user to select a capture source via some UI, which can be disruptive. The --auto-select-desktop-capture-source command-line switch can apparently be used to bypass this, but I haven't been able to use it with success.
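For reference, that desktopCapture flow looks roughly like this (a sketch based on the chrome.desktopCapture docs; whether tab audio can be requested without a video track here is something I have not verified):

// In an extension page (e.g. popup), not the service worker
chrome.desktopCapture.chooseDesktopMedia(['tab', 'audio'], (streamId) => {
  if (!streamId) return; // user cancelled the picker
  navigator.mediaDevices.getUserMedia({
    audio: {
      mandatory: { chromeMediaSource: 'desktop', chromeMediaSourceId: streamId }
    },
    video: {
      mandatory: { chromeMediaSource: 'desktop', chromeMediaSourceId: streamId }
    }
  }).then((stream) => {
    // hand the stream to a MediaRecorder, as in the original post below
  });
});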
The example extension here works around tabCapture not working in service workers by creating its own inactive tab from which to access the tabCapture API and record the currently active tab:
https://github.com/zhw2590582/chrome-audio-capture
This seems to be the best solution I've found so far in terms of UX. The background page provided in Manifest V2 is essentially replaced with a phantom tab.
The roundabout nature of the second solution also seems to suggest that the tabCapture API is essentially not intended for use in Manifest V3, or else there would be a more straightforward way to use it. I am disappointed that Manifest V3 is being enforced while essentially leaving behind Manifest V2 features such as this one.
Original Post:
I'm trying to write a manifest v3 Chrome extension that captures tab audio. However as far as I can tell, with manifest v3 there are some changes that make this a bit difficult:
Background scripts are replaced by service workers.
Service workers do not have access to the chrome.tabCapture API.
Despite this, I managed to get something that nearly works, since popup scripts still have access to chrome.tabCapture. However, there is a drawback: the audio of the tab is muted and there doesn't seem to be a way to unmute it. This is what I have so far:
Query the service worker for the current tab from the popup script:
let tabId;

// Fetch tab immediately
chrome.runtime.sendMessage({command: 'query-active-tab'}, (response) => {
  tabId = response.id;
});
This is the service worker, which responds with the current tab ID.
chrome.runtime.onMessage.addListener(
  (request, sender, sendResponse) => {
    // Popup asks for current tab
    if (request.command === 'query-active-tab') {
      chrome.tabs.query({active: true}, (tabs) => {
        if (tabs.length > 0) {
          sendResponse({id: tabs[0].id});
        }
      });
      return true;
    }
    ...
Again in the popup script, from a keyboard shortcut command, use chrome.tabCapture.getMediaStreamId to get a media stream ID to be consumed by the current tab, and send that stream ID back to the service worker.
// On command, get the stream ID and forward it back to the service worker
chrome.commands.onCommand.addListener((command) => {
  chrome.tabCapture.getMediaStreamId({consumerTabId: tabId}, (streamId) => {
    chrome.runtime.sendMessage({
      command: 'tab-media-stream',
      tabId: tabId,
      streamId: streamId
    })
  });
});
The service worker forwards that stream ID to the content script.
chrome.runtime.onMessage.addListener(
  (request, sender, sendResponse) => {
    ...
    // Popup sent back media stream ID, forward it to the content script
    if (request.command === 'tab-media-stream') {
      chrome.tabs.sendMessage(request.tabId, {
        command: 'tab-media-stream',
        streamId: request.streamId
      });
    }
  }
);
The content script uses navigator.mediaDevices.getUserMedia to get the stream.
// Service worker sent us the stream ID, use it to get the stream
chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
  navigator.mediaDevices.getUserMedia({
    video: false,
    audio: {
      mandatory: {
        chromeMediaSource: 'tab',
        chromeMediaSourceId: request.streamId
      }
    }
  })
  .then((stream) => {
    // Once we're here, the audio in the tab is muted
    // However, recording the audio works!
    const recorder = new MediaRecorder(stream);
    const chunks = [];
    recorder.ondataavailable = (e) => {
      chunks.push(e.data);
    };
    recorder.onstop = (e) => saveToFile(new Blob(chunks), "test.wav");
    recorder.start();
    setTimeout(() => recorder.stop(), 5000);
  });
});
Here is the code that implements the above: https://github.com/killergerbah/-test-tab-capture-extension
This actually does produce a MediaStream, but the drawback is that the sound of the tab is muted. I've tried playing the stream through an audio element, but that seems to do nothing.
Is there a way to obtain a stream of the tab audio in a manifest v3 extension without muting the audio in the tab?
I suspect that this approach might be completely wrong as it's so roundabout, but this is the best I could come up with after reading through the docs and various StackOverflow posts.
I've also read that the tabCapture API is going to be moved for manifest v3 at some point, so maybe the question doesn't even make sense to ask - however if there is a way to still properly use it I would like to know.
I found your post very useful in progressing my implementation of an audio tab recorder.
Regarding the specific muting issue you were running into, I resolved it by looking here: Original audio of tab gets muted while using chrome.tabCapture.capture() and MediaRecorder()
// Service worker sent us the stream ID, use it to get the stream
chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
  navigator.mediaDevices.getUserMedia({
    video: false,
    audio: {
      mandatory: {
        chromeMediaSource: 'tab',
        chromeMediaSourceId: request.streamId
      }
    }
  })
  .then((stream) => {
    // To resolve the original audio muting, route the captured
    // stream back to the speakers through an AudioContext
    const context = new AudioContext();
    const audio = context.createMediaStreamSource(stream);
    audio.connect(context.destination);

    const recorder = new MediaRecorder(stream);
    const chunks = [];
    recorder.ondataavailable = (e) => {
      chunks.push(e.data);
    };
    recorder.onstop = (e) => saveToFile(new Blob(chunks), "test.wav");
    recorder.start();
    setTimeout(() => recorder.stop(), 5000);
  });
});
This may not be exactly what you are looking for, but perhaps it may provide some insight.
I've tried playing the stream through an audio element, but that seems to do nothing.
Ironically, this is how I managed to get around the issue: by creating an audio element in the popup itself. When using tabCapture in the popup script, it returns the stream, and I set the audio element's srcObject to that stream.
HTML:
<audio id="audioObject" autoplay> No source detected </audio>
JS:
chrome.tabCapture.capture({audio: true, video: false}, function(stream) {
  var audio = document.getElementById("audioObject");
  audio.srcObject = stream;
})
According to this post on Manifest V3, chrome.capture will be the new namespace for tabCapture and the like, but I haven't seen anything beyond that.
I had this problem too, and I resolved it by using the Web Audio API. Just create a new context and connect it to a media stream source using the captured MediaStream. Here is an example:
avoidSilenceInTab: (desktopStream: MediaStream) => {
  const contextTab = new AudioContext();
  contextTab
    .createMediaStreamSource(desktopStream)
    .connect(contextTab.destination);
}

How to getUserMedia and record video while mixing in an mp3 with JavaScript? The mp3 can play, pause and stop

I am using the getUserMedia and MediaRecorder APIs to record video from a webcam.
I am using Chrome version 80.
How can I record the video while mixing in an mp3 with JavaScript, where the mp3 can play, pause and stop?
I don't know how to mix the mp3 into the video stream live.
When I call removeTrack and addTrack, MediaRecorder fails with:
Error: Failed to execute 'stop' on 'MediaRecorder': The MediaRecorder's state is 'inactive'.
my code on codepen: https://codepen.io/zhishaofei3/pen/eYNrYGj
and the main code:
function getFileBuffer(filepath) {
  return fetch(filepath, {method: 'GET'}).then(response => response.arrayBuffer())
}

// context, stream and dest are defined elsewhere in the codepen
function mp3play() {
  getFileBuffer('song.mp3')
    .then(buffer => context.decodeAudioData(buffer))
    .then(buffer => {
      console.log(buffer)
      const source = context.createBufferSource()
      source.buffer = buffer
      let volume = context.createGain()
      volume.gain.value = 1
      source.connect(volume)
      dest = context.createMediaStreamDestination()
      volume.connect(dest)
      // volume.connect(context.destination)
      source.start(0)
      const _audioTrack = stream.getAudioTracks();
      if (_audioTrack.length > 0) {
        _audioTrack[0].stop();
        stream.removeTrack(_audioTrack[0]);
      }
      console.log(dest.stream)
      console.log(dest.stream.getAudioTracks()[0])
      stream.addTrack(dest.stream.getAudioTracks()[0])
    })
}
Thank you!
Many containers don't support adding/removing tracks like that, and it's doubtful that the MediaRecorder API supports it at all. It's an unusual thing to do.
You need to create the stream you're going to record before instantiating MediaRecorder, with all of the tracks you want. Therefore, you need to do things in this order:
Set up your AudioContext.
Call getUserMedia(). (And while you're at it, set audio: false in your constraints. No need to open a microphone if you're not using one.)
Call videoStream.getVideoTracks() and dest.stream.getAudioTracks() to get all of the tracks.
Create a new MediaStream with those tracks: new MediaStream([audioTrack, videoTrack])
Now, run your MediaRecorder on this new MediaStream and you'll have what you want.
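A minimal sketch of that order (the names context, dest and videoStream mirror the question's code; the mp3 decoding graph is elided):

async function recordWithMp3() {
  const context = new AudioContext();
  const dest = context.createMediaStreamDestination();
  // ... decode the mp3 and connect source -> gain -> dest as in mp3play() above ...

  const videoStream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: false // no need to open a microphone
  });

  // Build the final stream first, then hand it to MediaRecorder
  const mixed = new MediaStream([
    ...videoStream.getVideoTracks(),
    ...dest.stream.getAudioTracks()
  ]);
  const recorder = new MediaRecorder(mixed);
  recorder.start();
  return recorder;
}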

Play stream from gstreamer in browser

I want to play a stream from gstreamer in a web browser.
I played around with RTP, WebRTC and SDP files, but while VLC was able to connect to the stream given a simple SDP file, browsers were not. I later understood that WebRTC requires a secure connection, which only complicates things and is not needed for my purposes. I stumbled upon the Media Source Extensions (MSE) API of HTML5, which seems like it could help, but I'm not able to find a comprehensive tutorial or appropriate specs on how to get gstreamer to stream the correct data, or on how to play it using MSE. I'm also not sure about the latency of using MSE.
So is there a way to play a stream from gstreamer in a browser?
Thanks.
Using the node-webrtc project, I was able to combine output from gstreamer with a WebRTC call. For gstreamer, there is a project which enables its use with node: gstreamer-superficial. So basically, you need to run the gstreamer process from the node process, which can then control the gstreamer output. On every gstreamer frame a callback is called, which takes the frame and can send it to the WebRTC call.
Then the WebRTC call needs to be implemented; some signaling protocol is required. One side of the call will be the server and the other will be the client's browser, instead of two browsers. Then a video track is created, into which frames from gstreamer-superficial are pushed.
const { RTCVideoSource } = require("wrtc").nonstandard;
const gstreamer = require("gstreamer-superficial");

const source = new RTCVideoSource();
// This is the WebRTC video track which should be used with addTransceiver, see below
const track = source.createTrack();

const frame = {
  width: 1920,
  height: 1080,
  data: null
};

const pipeline = new gstreamer.Pipeline("v4l2src ! videorate ! video/x-raw,format=YUY2,width=1920,height=1080,framerate=25/1 ! videoconvert ! video/x-raw,format=I420 ! appsink name=sink");
const appsink = pipeline.findChild("sink");

const pull = function() {
  appsink.pull(function(buf, caps) {
    if (buf) {
      frame.data = new Uint8Array(buf);
      try {
        source.onFrame(frame);
      } catch (e) {}
      pull();
    } else if (!caps) {
      console.log("PULL DROPPED");
      setTimeout(pull, 500);
    }
  });
};

pipeline.play();
pull();

// Example:
const useTrack = SomeRTCPeerConnection => SomeRTCPeerConnection.addTransceiver(track, { direction: "sendonly" });

getUserMedia() Screen share with Audio and update tray

When I use getUserMedia() for screen share, I don't get audio.
Things I would like to do, but couldn't find any relevant information on:
I want to capture both the screen and audio at the same time. How can I achieve this?
When my screen share starts, the tray below appears. What is it called, and how can I modify it (like its looks)?
Screenshot: [image of the screen-sharing tray, not included here]
If you want one stream made of your screen sharing for the video track and your webcam/mic audio for the audio track, you will need to make two calls to getUserMedia, with constraints set to screen and audio respectively. Then you will have to put the tracks into a common stream. Eventually, you can attach that stream to a peer connection.
As peveuve said, you can also use two peer connections, but that comes with at least two problems:
you will not have synchronization between audio and video (not so important for screen sharing)
you will need two connections => twice the number of ports => more chances to fail. That is more likely to be a problem.
This is a mandatory security feature of the browser (to prevent a rogue page from broadcasting your screen without you knowing it). I do not know of a way to manipulate it at all.
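A sketch of the two-call approach; note it uses the newer getDisplayMedia for the screen track rather than the screen-constrained getUserMedia the answer describes:

// One call for the screen video, one for the microphone audio,
// merged into a single stream ready for a peer connection
async function getScreenWithMicAudio() {
  const screen = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  return new MediaStream([
    ...screen.getVideoTracks(),
    ...mic.getAudioTracks()
  ]);
}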
It's possible with npm-msr on Chrome.
getScreenId(function (error, sourceId, screen_constraints) {
  navigator.getUserMedia = navigator.getUserMedia || navigator.mozGetUserMedia || navigator.webkitGetUserMedia;
  navigator.getUserMedia(screen_constraints, function (stream) {
    navigator.getUserMedia({audio: true}, function (audioStream) {
      stream.addTrack(audioStream.getAudioTracks()[0]);
      var mediaRecorder = new MediaStreamRecorder(stream);
      mediaRecorder.mimeType = 'video/mp4';
      mediaRecorder.stream = stream;
      document.querySelector('video').src = URL.createObjectURL(stream);
      var video = document.getElementById('screen-video');
      if (video) {
        video.src = URL.createObjectURL(stream);
        video.width = 360;
        video.height = 300;
      }
    }, function (error) {
      alert(error);
    });
  }, function (error) {
    alert(error);
  });
});
Check this answer: Is it possible broadcast audio with screensharing with WebRTC
You can't share both screen and audio in the same peer, you have to open 2 peers.