HTML MediaRecorder recording lags on 1080p stream - google-chrome

I am reading 1080p and 720p external video camera streams through the getUserMedia API with the following config:
let c_constraints = {
  audio: false,
  video: {
    deviceId: { exact: videoValue },
    width: { min: 320, ideal: 1280, max: 1920 },
    height: { min: 144, ideal: 720, max: 1080 },
    frameRate: { min: 15, ideal: 30, max: 60 }
  }
};
navigator.mediaDevices.getUserMedia(c_constraints)
  .then((stream) => c_handleStream(stream))
  .catch((e) => c_handleError(e));
(Note: the standard constraint name is frameRate, not framerate - the lowercase spelling is silently ignored.)
And recording the stream as follows:
try {
  c_recorder = new MediaRecorder(currentCameraStream);
} catch (e) {
  throw e;
}
Issue: this works fine on high-end devices, but on a device with 4 GB RAM and an Intel Core i3, recording in 1080p or 720p lags and the video is choppy, but only for the first 3 seconds; the rest of the video is fine.
Also, when playing the recorded video in VLC media player, the timer jumps directly from 00:01 to 00:03.
If this were a memory/buffer issue, the other blobs would be affected too. Am I missing something? Should I use MediaRecorder in a different way, or with any different options?
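On "different options": MediaRecorder accepts a second options argument where you can pin the container/codec and cap the video bitrate, which can ease the encoder's startup load on weaker CPUs. A minimal sketch - the candidate list and the 2.5 Mbps cap are illustrative assumptions, not tested values:

```javascript
// Pick the first MIME type the recorder supports; isSupported is injected
// as a callback so the helper stays testable outside a browser.
function pickSupportedMimeType(candidates, isSupported) {
  for (const type of candidates) {
    if (isSupported(type)) return type;
  }
  return ''; // empty string lets the browser choose its default
}

const candidates = [
  'video/webm;codecs=vp9',
  'video/webm;codecs=vp8',
  'video/webm'
];

// In the browser (MediaRecorder.isTypeSupported is a real static method):
// const mimeType = pickSupportedMimeType(candidates,
//   (t) => MediaRecorder.isTypeSupported(t));
// c_recorder = new MediaRecorder(currentCameraStream, {
//   mimeType,
//   videoBitsPerSecond: 2500000 // ~2.5 Mbps; illustrative cap
// });
```

Capping videoBitsPerSecond is worth trying here because the default bitrate chosen for 1080p may overwhelm the encoder while it spins up.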
PS: I have tried RecordRTC by Muaz Khan, but it seems heavy on the CPU: usage surges past 70% on the machine mentioned above, which makes the machine extremely slow.
Please shed some light on this.

Finally got it working. In case this helps anyone: if the machine you are recording on is in power-saving mode, the audio/video out-of-sync issue can be resolved by enabling performance mode in the Control Panel.

Related

Avoid pops and clicks whilst recording HTML Audio

I am building an app that allows the user to record a message with the microphone and am using a node package called mic-recorder-to-mp3. The constructor takes a bit-rate setting which is currently 128 (which should be more than sufficient for voice recording).
We've started collecting recordings with this app and some are fine, but others are really awful with loud clicks and pops.
I understand that the sample-rate cannot be set and is based on the hardware you're using, but is there something else I am missing? Is that bit rate too high? Do I need to set more memory to the AudioBuffer? Any advice greatly appreciated.
I have discovered to my delight that you can now set constraints when you call getUserMedia, which could improve the audio recording quality. They are specified as MediaTrackConstraints: https://developer.mozilla.org/en-US/docs/Web/API/MediaTrackConstraints
and usage:
export const audioTrackConstraints = {
  echoCancellation: { exact: false },
  autoGainControl: { exact: false },
  noiseSuppression: { exact: false },
  sampleRate: 16000,
};

navigator.mediaDevices.getUserMedia({ audio: audioTrackConstraints })
  .then(() => {
    console.log('Permission Granted');
  })
  .catch((error) => {
    console.log(error.message);
  });
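Keep in mind that constraints are requests, not guarantees - the browser may silently ignore sampleRate, for example. You can compare what you asked for with what the track actually got via getSettings(). The diff helper below is a hypothetical utility, not part of any library:

```javascript
// List requested constraints the browser did not honor, by comparing the
// requested values with the settings actually applied to the track.
function unappliedConstraints(requested, applied) {
  const diffs = {};
  for (const [key, want] of Object.entries(requested)) {
    const wanted =
      want && typeof want === 'object' && 'exact' in want ? want.exact : want;
    if (key in applied && applied[key] !== wanted) {
      diffs[key] = { requested: wanted, applied: applied[key] };
    }
  }
  return diffs;
}

// In the browser:
// navigator.mediaDevices.getUserMedia({ audio: audioTrackConstraints })
//   .then((stream) => {
//     const settings = stream.getAudioTracks()[0].getSettings();
//     console.log(unappliedConstraints(audioTrackConstraints, settings));
//   });
```

If the clicks and pops correlate with recordings where a constraint was not honored, that narrows down the culprit.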

Audio distortion occurs when using AudioWorkletProcessor with a MediaStream source and connecting a bluetooth device while it is already running

In our project, we use AudioContext to wire up input from a microphone to an AudioWorkletProcessor and out to a MediaStream. Ultimately, this is sent to other peers in a WebRTC call.
If someone loads the page, the audio always sounds fine. But if they start with a hard-wired microphone (like a laptop mic or webcam) and then connect a Bluetooth device (such as AirPods or headphones), the audio becomes distorted and robotic-sounding.
If we tear out all the other code and simplify it, we still have the issue.
bypassProcessor.js
// Basic processor that wires input to output without transforming the data
// https://github.com/GoogleChromeLabs/web-audio-samples/blob/main/audio-worklet/basic/hello-audio-worklet/bypass-processor.js
class BypassProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const input = inputs[0];
    const output = outputs[0];
    for (let channel = 0; channel < output.length; ++channel) {
      // Guard: the input may momentarily have fewer channels than the output
      if (input[channel]) {
        output[channel].set(input[channel]);
      }
    }
    return true;
  }
}
registerProcessor('bypass-processor', BypassProcessor);
main.js
const microphoneStream = await navigator.mediaDevices.getUserMedia({
  audio: true, // have also tried { channelCount: 1 } and { channelCount: { exact: 1 } }
  video: false
})
const audioCtx = new AudioContext()
const inputNode = audioCtx.createMediaStreamSource(microphoneStream)
await audioCtx.audioWorklet.addModule('worklet/bypassProcessor.js')
const processorNode = new AudioWorkletNode(audioCtx, 'bypass-processor')
inputNode.connect(processorNode).connect(audioCtx.destination)
Interestingly, I have found if you comment out the 2 audio worklet lines and instead create a simple gain node, then it works fine.
// await audioCtx.audioWorklet.addModule('worklet/bypassProcessor.js')
// const processorNode = new AudioWorkletNode(audioCtx, 'bypass-processor')
const gainNode = audioCtx.createGain()
Also, if you simply create the AudioWorkletNode without even connecting it to the other nodes, the issue still reproduces.
I've created a small React app here that reproduces the problem: https://github.com/JacobMuchow/audio_distortion_repro/tree/master
I've tried some options, such as detecting the change with the 'devicechange' event and then closing the old AudioContext & nodes and recreating everything, but this only works some of the time. If I wait for some time and then recreate it again, it works, so I'm worried about some type of garbage-collection issue with the processor when attempting this, but that might be beside the point.
I suspect this has something to do with sample rates... when the AudioContext is correctly recreated it switches from 48 kHz to 16 kHz and then it sounds fine. But sometimes it is recreated still at 48 kHz and continues to sound robotic.
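If the sample-rate mismatch theory is right, two things worth trying are pinning the context's rate at construction (the AudioContext constructor accepts a sampleRate option) and rebuilding only when the rates actually diverge. A sketch under those assumptions - the 48000 value and the rebuild check are illustrative, not a known fix:

```javascript
// Hypothetical helper: decide whether the AudioContext must be rebuilt
// because the new input device runs at a different hardware rate.
function needsRebuild(contextRate, deviceRate) {
  return typeof deviceRate === 'number' && contextRate !== deviceRate;
}

// In the browser:
// const audioCtx = new AudioContext({ sampleRate: 48000 }); // pin the rate
// navigator.mediaDevices.ondevicechange = async () => {
//   const track = microphoneStream.getAudioTracks()[0];
//   const { sampleRate } = track.getSettings();
//   if (needsRebuild(audioCtx.sampleRate, sampleRate)) {
//     await audioCtx.close();
//     // ...recreate the context, source node and worklet here
//   }
// };
```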
Threads on the internet concerning this are incredibly sparse and I'm hoping someone has specific experience with this issue or this API and can point out what I need to do differently.
For Chrome, the problem is very likely https://crbug.com/1090441, which was recently fixed. I think Firefox doesn't have this problem, but I didn't check.

How to improve Forge Viewer performance in local environment

I am trying to load a local model, and I am using the following load option:
var options = {
  "env": "Local",
  "document": "0/0.svf",
  "useADP": false,
  "useConsolidation": true,
  "consolidationMemoryLimit": 104857600,
  "createWireframe": true,
  "bvhOptions": {
    "frags_per_leaf_node": 512,
    "max_polys_per_node": 100000
  },
  "isAEC": true,
  "disablePrecomputedNodeBoxes": true
};
Autodesk.Viewing.Initializer(options, function () {
  var viewer = new Autodesk.Viewing.Private.GuiViewer3D(myViewerDiv, options);
  viewer.start(options.document, options);
});
Viewing the model in the local environment is significantly slower (lower FPS, less responsive) compared to the "AutodeskProduction" environment with the same setup. Are there any additional settings that can further improve the performance? Thanks.
I'd say that among these options a fine-tuned combination of useConsolidation and consolidationMemoryLimit probably did the trick for you - see here for details:
const initializerOptions = {
  useConsolidation: true,
  consolidationMemoryLimit: 150 * 1024 * 1024
}
However, the balancing act here is that a large BVH (as configured via the bvhOptions) might neutralize the performance gain, so you'd want to tune these factors against each other.
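Putting the knobs side by side, a combined initializer sketch might look like the following. The values are illustrative only (borrowed from the question plus the 150 MB limit above), not recommendations; the right numbers depend on model size and available memory:

```javascript
// Illustrative combination of the consolidation and BVH tuning options
// discussed above; adjust to the model and hardware at hand.
const options = {
  env: 'Local',
  document: '0/0.svf',
  useConsolidation: true,
  consolidationMemoryLimit: 150 * 1024 * 1024, // 150 MB
  bvhOptions: {
    frags_per_leaf_node: 512,   // fewer frags per leaf = finer BVH, more nodes
    max_polys_per_node: 100000
  }
};
```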

How to disable system audio enhancements using webRTC?

On different systems (Windows/Android/etc.) there are some "built-in" audio enhancements, for example AEC (automatic echo cancellation), NR (noise reduction), and AGC (automatic gain control). Each of these can be turned on or off in any combination.
There are also audio enhancements in some browsers (I know about Chrome and Firefox).
Is it possible to turn them all off using WebRTC?
For all I know, it is possible to turn off those "browser enhancements" and I think I managed it by specifying mediaConstraints. Example for Chrome:
var mediaConstraints = {
  audio: {
    echoCancellation: { exact: false },
    googEchoCancellation: { exact: false },
    googAutoGainControl: { exact: false },
    googNoiseSuppression: { exact: false }
  }
};
I can't find a solution for turning off the system/device-specific audio enhancements.
There is a similar question: WebRTC - disable all audio processing, but I think it addresses only the browser enhancements.
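For the browser side at least, the standard flags and Chrome's legacy goog*-prefixed equivalents can be passed side by side; other browsers simply ignore the nonstandard keys. A hypothetical builder (which goog* flags are honored varies by Chrome version, so treat the list as an assumption):

```javascript
// Build audio constraints that disable both the standard processing flags
// and Chrome's legacy goog*-prefixed equivalents (nonstandard; other
// browsers ignore the goog* keys).
function rawAudioConstraints() {
  const off = { exact: false };
  return {
    echoCancellation: off,
    noiseSuppression: off,
    autoGainControl: off,
    googEchoCancellation: off,
    googNoiseSuppression: off,
    googAutoGainControl: off,
    googHighpassFilter: off
  };
}

// Usage in the browser:
// navigator.mediaDevices.getUserMedia({ audio: rawAudioConstraints() })
```

This still only covers the browser layer; OS- or driver-level enhancements remain out of reach from JavaScript.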

getUserMedia() Screen share with Audio and update tray

When I use getUserMedia() for screen share, I don't get audio.
Things I would like to do, but couldn't find any relevant material on:
I want to capture both the screen and audio at the same time. How can I achieve this?
When my screen share starts, the tray below appears. What is it called, and how can I modify it (e.g. its looks)?
Screenshot:
If you want one stream made of your screen sharing for the video track and your webcam/mic audio for the audio track, you will need to make two calls to getUserMedia, with constraints set to screen and audio respectively. Then you will have to put the tracks into a common stream. Eventually, you can attach that stream to a peer connection.
As peveuve said, you can also use two peer connections, but that comes with at least two problems:
you will not have synchronization between audio and video (not so important for screen sharing)
you will need two connections => twice the number of ports => more chances to fail. That is more likely to be a problem.
This is a mandatory security feature of the browser (to prevent a rogue page from broadcasting your screen without you knowing it). I do not know of any way to manipulate it.
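For what it's worth, in current browsers the same combination can be done with getDisplayMedia for the screen plus getUserMedia for the mic, merging the tracks into one MediaStream. A sketch assuming getDisplayMedia support (the merge helper is kept separate so it can be tested outside a browser):

```javascript
// Return the track list for a combined stream: screen video + mic audio.
function combineTracks(videoTracks, audioTracks) {
  return [...videoTracks, ...audioTracks];
}

// In the browser:
// const screen = await navigator.mediaDevices.getDisplayMedia({ video: true });
// const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
// const combined = new MediaStream(
//   combineTracks(screen.getVideoTracks(), mic.getAudioTracks())
// );
// // attach to a peer connection:
// // combined.getTracks().forEach((t) => pc.addTrack(t, combined));
```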
It's possible with npm-msr on Chrome.
getScreenId(function (error, sourceId, screen_constraints) {
  navigator.getUserMedia = navigator.mozGetUserMedia || navigator.webkitGetUserMedia;
  navigator.getUserMedia(screen_constraints, function (stream) {
    navigator.getUserMedia({ audio: true }, function (audioStream) {
      stream.addTrack(audioStream.getAudioTracks()[0]);
      var mediaRecorder = new MediaStreamRecorder(stream);
      mediaRecorder.mimeType = 'video/mp4';
      mediaRecorder.stream = stream;
      document.querySelector('video').src = URL.createObjectURL(stream);
      var video = document.getElementById('screen-video');
      if (video) {
        video.src = URL.createObjectURL(stream);
        video.width = 360;
        video.height = 300;
      }
    }, function (error) {
      alert(error);
    });
  }, function (error) {
    alert(error);
  });
});
Check this answer: Is it possible broadcast audio with screensharing with WebRTC
You can't share both screen and audio with the same peer connection; you have to open two peer connections.