I'm trying to build something using the script processor node. Almost anything I do in terms of debugging crashes Chrome; specifically, the tab. Whenever I bring up dev tools, and 100% of the time I put a breakpoint in the onaudioprocess handler, the tab dies and I have to either find the Chrome helper process for that tab or force-quit Chrome altogether to get started again. It has basically crippled my development for the time being. Is this a known issue? Do I need to take certain precautions to prevent Chrome from crashing? Are the real-time aspects of the Web Audio API simply not debuggable?
Without seeing your code, it's a bit hard to diagnose the problem.
Does running this code snippet crash your browser tab?
let audioCtx = new (window.AudioContext || window.webkitAudioContext)();

function onPlay() {
  let scriptProcessor = audioCtx.createScriptProcessor(4096, 2, 2);
  scriptProcessor.onaudioprocess = onAudioProcess;
  scriptProcessor.connect(audioCtx.destination);

  let oscillator = audioCtx.createOscillator();
  oscillator.type = "sawtooth";
  oscillator.frequency.value = 220;
  oscillator.connect(scriptProcessor);
  oscillator.start();
}
function onAudioProcess(event) {
  let { inputBuffer, outputBuffer } = event;
  for (let channel = 0; channel < outputBuffer.numberOfChannels; channel++) {
    let inputData = inputBuffer.getChannelData(channel);
    let outputData = outputBuffer.getChannelData(channel);
    for (let sample = 0; sample < inputBuffer.length; sample++) {
      outputData[sample] = inputData[sample];
      // Add white noise to the oscillator signal.
      outputData[sample] += ((Math.random() * 2) - 1) * 0.2;
      // Un-comment the following line to crash the browser tab.
      // console.log(sample);
    }
  }
}
<button type="button" onclick="onPlay()">Play</button>
If it crashes, there's something else in your local dev environment causing you problems, because it runs perfectly for me.
If not, then maybe you are doing a console.log() (or some other heavy operation) in your onaudioprocess event handler? Remember, this handler processes thousands of audio samples every time it is called, so you need to be careful about what you do inside it. For example, try un-commenting the console.log() line in the code snippet above – your browser tab will crash.
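If you do need to see what is happening inside the handler, one option is to compute a cheap summary per buffer and log it at a throttled rate, rather than logging per sample. A rough sketch (the one-second interval and the peak metric are just placeholder choices):

let lastLogTime = 0;

function onAudioProcessWithLogging(event) {
  let inputData = event.inputBuffer.getChannelData(0);
  let outputData = event.outputBuffer.getChannelData(0);
  let peak = 0;
  for (let sample = 0; sample < inputData.length; sample++) {
    outputData[sample] = inputData[sample];              // pass the audio through
    peak = Math.max(peak, Math.abs(inputData[sample]));  // cheap running summary
  }
  // Log at most once per second, outside the per-sample loop.
  let now = performance.now();
  if (now - lastLogTime > 1000) {
    lastLogTime = now;
    console.log("peak input level:", peak.toFixed(3));
  }
}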
In our project, we use AudioContext to wire up input from a microphone to an AudioWorkletProcessor and out to a MediaStream. Ultimately, this is sent to other peers in a WebRTC call.
If someone loads the page, the audio always sounds fine. But if they connect with a hard-wired microphone (like a laptop mic or webcam) and then connect a Bluetooth device (such as AirPods or headphones), the audio becomes distorted and robotic-sounding.
If we tear out all the other code and simplify it, we still have the issue.
bypassProcessor.js
// Basic processor that wires input to output without transforming the data
// https://github.com/GoogleChromeLabs/web-audio-samples/blob/main/audio-worklet/basic/hello-audio-worklet/bypass-processor.js
class BypassProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const input = inputs[0];
    const output = outputs[0];
    for (let channel = 0; channel < output.length; ++channel) {
      output[channel].set(input[channel]);
    }
    return true;
  }
}

registerProcessor('bypass-processor', BypassProcessor);
main.js
const microphoneStream = await navigator.mediaDevices.getUserMedia({
  audio: true, // have also tried { channelCount: 1 } and { channelCount: { exact: 1 } }
  video: false
})
const audioCtx = new AudioContext()
const inputNode = audioCtx.createMediaStreamSource(microphoneStream)
await audioCtx.audioWorklet.addModule('worklet/bypassProcessor.js')
const processorNode = new AudioWorkletNode(audioCtx, 'bypass-processor')
inputNode.connect(processorNode).connect(audioCtx.destination)
Interestingly, I have found that if you comment out the two audio worklet lines and instead create a simple gain node, it works fine.
// await audioCtx.audioWorklet.addModule('worklet/bypassProcessor.js')
// const processorNode = new AudioWorkletNode(audioCtx, 'bypass-processor')
const gainNode = audioCtx.createGain()
Also if you simply create the AudioWorkletNode, but don't even connect it to the others, this also reproduces the issue.
I've created a small React app here that reproduces the problem: https://github.com/JacobMuchow/audio_distortion_repro/tree/master
I've tried some options, such as detecting when this happens via the 'devicechange' event, closing the old AudioContext & nodes, and recreating everything, but this only works some of the time. If I wait a while and then recreate it again, it works, so I'm worried about some kind of garbage-collection issue with the processor when attempting this, but that might be beside the point.
I suspect this has something to do with sample rates... when the AudioContext is correctly recreated it switches from 48 kHz to 16 kHz and then it sounds fine. But sometimes it is recreated still at 48 kHz and it continues to sound robotic.
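For reference, the recreation logic I've been trying looks roughly like this (simplified; rebuildAudioGraph is a stand-in for re-running the getUserMedia / addModule / connect code from main.js above):

async function rebuildAudioGraph() {
  // Placeholder: create a brand-new AudioContext and re-run the getUserMedia,
  // audioWorklet.addModule and connect calls shown above, so the new context
  // can pick up the new default device and its sample rate.
}

navigator.mediaDevices.addEventListener('devicechange', async () => {
  await audioCtx.close()   // tear down the old context and its nodes
  await rebuildAudioGraph()
})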
Threads on the internet concerning this are incredibly sparse and I'm hoping someone has specific experience with this issue or this API and can point out what I need to do differently.
For Chrome, the problem is very likely https://crbug.com/1090441, which was recently fixed. I think Firefox doesn't have this problem, but I didn't check.
I have what I assumed was a pretty standard service worker at http://www.espruino.com/ide/serviceworker.js for the page http://www.espruino.com/ide
However, recently, when I have "Update on reload" set in the Chrome dev tools' Service Workers panel, the website's loading indicator stays on, and the status shows a new service worker is "Trying to Install".
Eventually I see a red 'x' by the new service worker and a '1' with a link, but that doesn't do anything or provide any tooltip. Clicking on serviceworker.js brings me to the source file with the first line highlighted in yellow, but there are no errors highlighted.
I've done the usual and checked that all files referenced by the service worker exist and they do, and I have no idea what to look at next.
Does anyone have any clues how to debug this further?
thanks!
I'm on Chrome Beta.
I updated to the newest release and magically everything works. So I guess it was a bug in Chrome or the devtools, not my code.
For those running into this issue with the latest version of Chrome, I was able to fix it by caching each resource in its own cache. Just call caches.open for every file you want to store. You can do this because caches.match will automatically find the file in your sea of caches.
As a messy example:
self.addEventListener('install', event => {
  var swpromise = new Promise(function(resolve, reject) {
    var havedone = 0;
    function cacheDone() {
      havedone++;
      if (havedone == resources_array.length) {
        resolve();
      }
    }
    // Open a separate cache per resource, keyed by version and index.
    function addToCache(index, url) {
      caches.open(version + "-" + index)
        .then(cache => cache.addAll([url]))
        .then(function() { cacheDone(); });
    }
    for (var i = 0; i < resources_array.length; i++) {
      addToCache(i, resources_array[i]);
    }
  });
  event.waitUntil(swpromise);
});
I used the version number and the index of the file as the cache key for each file. I then delete all of the old caches on activation of the service worker with something similar to the following code:
addEventListener('activate', e => {
  e.waitUntil(caches.keys().then(keys => {
    return Promise.all(keys.map(key => {
      if (key.indexOf(version + "-") == -1) return caches.delete(key);
    }));
  }));
});
Hopefully this works for you, it did in my case.
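As an aside, the same one-cache-per-resource install step can be written a bit more compactly with Promise.all; here's a sketch assuming the same version and resources_array variables:

self.addEventListener('install', event => {
  event.waitUntil(Promise.all(
    resources_array.map((url, index) =>
      // One cache per resource, keyed by version and index as above.
      caches.open(version + "-" + index).then(cache => cache.add(url))
    )
  ));
});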
Is it possible to tell if another Chrome tab is using webkitSpeechRecognition?
If you try to use webkitSpeechRecognition while another tab is using it, it will throw an error "aborted" without any message. I want to be able to know if webkitSpeechRecognition is open in another tab, and if so, throw a better error that could notify the user.
Unless your customer is on the same website (you could check by logging the IP/browser fingerprint in a database and querying it via JSON), you cannot do that.
Cross-domain protection is in effect, and that lets you know zilch about what happens in other tabs or frames.
I am using webkitSpeechRecognition in Chrome (it does not work in Firefox) and I faced the same issue with multiple Chrome tabs. Until the browser implements a better error message, here is a temporary solution that works for me:
Detect when a tab gains or loses focus in Chrome using JavaScript, like this:
var isChromium = window.chrome;
if (isChromium) {
  if (window.addEventListener) {
    // bind focus event
    window.addEventListener("focus", function (event) {
      console.log("Browser tab focus..");
      recognition.stop(); // to avoid the "aborted" error
      recognition.start();
    }, false);
    // bind blur event
    window.addEventListener("blur", function (event) {
      console.log("Browser tab blur..");
      recognition.stop();
    }, false);
  }
}
There's a small workaround for it. Store a timestamp when SpeechRecognition is started; when it exits (normally after a few seconds of inactivity), compare the current time against that start timestamp. If two tabs are using the API simultaneously, recognition will exit almost immediately, so a very short elapsed time means another tab is already using it.
For Chrome, you can use the code below and modify it based on your needs. Firefox doesn't support this API at the moment.
var transcriptionStartTime;
var timeSinceLastStart;

function configureRecognition() {
  var webkitSpeechRecognition = window.webkitSpeechRecognition || window.SpeechRecognition;
  if ('webkitSpeechRecognition' in window) {
    recognition = new webkitSpeechRecognition();
    recognition.continuous = true;
    recognition.interimResults = true;
    recognition.lang = "en-US";
    recognition.onstart = function() {
      // Remember when recognition actually started.
      transcriptionStartTime = new Date().getTime();
    }
    recognition.onend = function() {
      timeSinceLastStart = new Date().getTime() - transcriptionStartTime;
      if (timeSinceLastStart < 100) {
        alert('Speech recognition failed to start. Please close the tab that is currently using it.');
      }
    }
  }
}
See browser compatibility here: https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition
As discussed in a previous question, I have built a prototype (using MVC Web API, NAudio and NAudio.Lame) that streams live low-quality audio after converting it to MP3. The source stream is PCM: 8 kHz, 16-bit, mono, and I'm making use of HTML5's audio tag.
On both Chrome and IE11 there is a 15-34 second delay (high latency) before audio is heard from the browser, which, I'm told, is unacceptable for our end users. Ideally the latency would be no more than 5 seconds. The delay occurs even when using the preload="none" attribute within my audio tag.
Looking more closely at the issue, it appears as though both browsers will not start playing audio until they have received ~32K of audio data. With that in mind, I can affect the delay by changing Lame's MP3 'bitrate' setting. However, if I reduce the delay (by sending more data to the browser for the same length of audio), I will introduce audio drop-outs later.
Examples:
If I use Lame's V0 encoding the delay is nearly 34 seconds which requires almost 0.5 MB of source audio.
If I use Lame's ABR_32 encoding, I can reduce the delay to 10-15 seconds but I will experience pauses and drop-outs throughout the listening session.
Questions:
Any ideas how I can minimize the start-up delay (latency)?
Should I continue investigating various Lame 'presets' in hopes of picking the "right" one?
Could it be that MP3 is not the best format for live streaming?
Would switching to Ogg/Vorbis (or Ogg/OPUS) help?
Do we need to abandon HTML5's audio tag and use Flash or a java applet?
Thanks.
You cannot reduce the delay, since you have no control over the browser's code or buffering size. The HTML5 specification does not enforce any constraint on this, so I don't see any reason why it would improve.
You can however implement a solution with the Web Audio API (it's quite simple), where you handle the streaming yourself.
If you can split your MP3 into fixed-size chunks (so that each MP3 chunk's size is known beforehand, or at least at receive time), then you can have live streaming in 20 lines of code. The chunk size will be your latency.
The key is to use AudioContext::decodeAudioData.
// Fix up prefixing
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
var offset = 0;
var byteOffset = 0;
var minDecodeSize = 16384; // This is your chunk size

var request = new XMLHttpRequest();
request.onprogress = function(evt)
{
  if (request.response)
  {
    var size = request.response.length - byteOffset;
    if (size < minDecodeSize) return;
    // In Chrome, XHR stream mode gives text, not ArrayBuffer.
    // In Firefox, you can get an ArrayBuffer as is.
    var ab;
    if (request.response instanceof ArrayBuffer)
      ab = request.response;
    else
    {
      ab = new ArrayBuffer(size);
      var buf = new Uint8Array(ab);
      for (var i = 0; i < size; i++)
        buf[i] = request.response.charCodeAt(i + byteOffset) & 0xff;
    }
    byteOffset = request.response.length;
    context.decodeAudioData(ab, function(buffer) {
      playSound(buffer);
    }, onError);
  }
};
request.open('GET', url, true);
request.responseType = expectedType; // 'stream' in chrome, 'moz-chunked-arraybuffer' in firefox, 'ms-stream' in IE
request.overrideMimeType('text/plain; charset=x-user-defined');
request.send(null);
function playSound(buffer) {
  var source = context.createBufferSource(); // creates a sound source
  source.buffer = buffer;                    // tell the source which sound to play
  source.connect(context.destination);       // connect the source to the context's destination (the speakers)
  source.start(offset);                      // schedule this chunk right after the previous one
  // note: on older systems, you may have to use the deprecated noteOn(time)
  offset += buffer.duration;
}
I have captured 3 videos on my mobile, which are by default stored in the phone gallery (Gallery/videos/). I have to play these 3 videos in one of my Flex mobile applications. How can I get the videos into the Flex project? If I need to browse the mobile directory, kindly help me with some code to do so.
I too am looking for an answer to this question. Right now, based on other Stackoverflow discussions, exhaustive perusal of tutorials and Adobe documentation, and comments to both (often the more useful resource), I'm coming to the conclusion that it's not possible.
You can use CameraRoll.browseForImage() to open the iOS photo gallery and see all entities of MediaType.IMAGE, but it will not show you MediaType.VIDEO.
You can use CameraUI to launch the system camera by delegation, and that returns a MediaPromise, but as far as I can tell it does not save the video you capture anywhere, and I cannot find a way to access the captured video using the MediaPromise (at least using the Loader class).
Here's my code as a hint in that direction. The second code block is using the CameraRoll to browseForImage() but there is no browseForVideo() in the API.
if (CameraUI.isSupported)
{
    camera = new CameraUI();
    camera.addEventListener(MediaEvent.COMPLETE, videoMediaEventComplete);
    camera.addEventListener(Event.CANCEL, cameraCanceled);
    camera.addEventListener(ErrorEvent.ERROR, cameraError);
    camera.launch(MediaType.VIDEO);
}
else
{
    statusText.text = "Camera not supported on this device.";
    startTimer();
}

if (CameraRoll.supportsBrowseForImage)
{
    roll = new CameraRoll();
    roll.addEventListener(MediaEvent.SELECT, cameraRollEventComplete);
    roll.addEventListener(Event.CANCEL, cameraCanceled);
    roll.addEventListener(ErrorEvent.ERROR, cameraError);
    roll.browseForImage();
}
else
{
    statusText.text = "Camera roll not supported on this device.";
    startTimer();
}
I've since found that Videos captured using the delegated system camera are stored in a temporary storage location that iOS -DOES!- allow access to. (I was pleasantly shocked.)
The captured video is not added to the device's Camera Roll as other videos captured using the iOS System Camera app are, so it's not enough to capture video and expect to be able to access it later (if, for instance, CameraRoll.browseForVideo() is ever added to the API).
Therefore, you have to 'get while the getting is good' and move the file from the temporary storage location to some non-volatile location such as ApplicationStorageDirectory or the user's Documents directory (The only options in iOS I think).
The MediaPromise... I think... is completely useless for accessing the video via any direct progressive loader/streamer method, but still provides the location/url/path/filename of the temporary file so you can perform File operations on it.
It's ironic that there are tutorials for getting around the lack of a file location/url/path/filename in the MediaPromise when using CameraRoll.browseForImage() (the trick is to use a Loader class to load the image content, which you can then write out to a file), yet when taking video the situation is reversed: the video content is not accessible, and instead a file location/url/path/filename is provided. It's also ironic that I was able to find nearly no resources to help with this. grumble
I'm going to include some code chunks without really editing them to strip out extraneous bits, because it's way past when I need to be in bed, but I wanted you to have this. I may come back and clean it up later.
This section is in a Spark SkinnablePopUpContainer and I use the same click event for several buttons, thus the below 'case' is in the switch-case in that event handler function.
In case you are not familiar, the 'close(true, data)' is the method to close the SkinnablePopUpContainer, tell the parent/owner that the container was closed purposefully and that it should look for the data object being shared back (i.e., there are changes to be 'commit'ed).
case "cameraVideo":
{
if(CameraUI.isSupported)
{
camera = new CameraUI();
camera.addEventListener(MediaEvent.COMPLETE, videoMediaEventComplete);
camera.addEventListener(Event.CANCEL, cameraCanceled);
camera.addEventListener(ErrorEvent.ERROR, cameraError);
camera.launch(MediaType.VIDEO);
}
else
{
statusText.text = "Camera not supported on this device.";
startTimer();
}
break;
}
protected function cameraCanceled(event:Event):void
{
    statusText.text = "Camera access canceled by user.";
    startTimer();
}

protected function cameraError(event:ErrorEvent):void
{
    statusText.text = "There was an error while trying to use the camera.";
    startTimer();
}

protected function videoMediaEventComplete(event:MediaEvent):void
{
    statusText.text = "Preparing captured video...";

    camera.removeEventListener(MediaEvent.COMPLETE, videoMediaEventComplete);
    camera.removeEventListener(Event.CANCEL, cameraCanceled);
    camera.removeEventListener(ErrorEvent.ERROR, cameraError);

    var media:MediaPromise = event.data;
    data.MediaType = MediaType.VIDEO;
    data.MediaPromise = media;
    data.source = "camera video";
    close(true, data);
}
This section is the Actionscript in the close handler of the parent/owner of the SkinnablePopUpContainer (truncated once the useful code is included)
private function choosePictureLightboxClosed(event:PopUpEvent):void
{
    imageButtonsActive = false;
    if (event.commit)
    {
        this.data = event.data as Object;
        filters = new Array();
        selection = true;

        switch (data.MediaType)
        {
            case MediaType.VIDEO:
            {
                mediaType = "video";
                trace(data.MediaPromise.file.url + " - " + data.MediaPromise.relativePath + " - " + data.MediaPromise.mediaType);

                var sourceFile:File = new File(data.MediaPromise.file.url);
                var destinationFile:File = File.applicationStorageDirectory.resolvePath("User" + parentApplication.userid);
                if (destinationFile.exists && !destinationFile.isDirectory)
                {
                    destinationFile.deleteFile();
                }
                destinationFile.createDirectory();
                destinationFile = destinationFile.resolvePath("Videos");
                if (destinationFile.exists && !destinationFile.isDirectory)
                {
                    destinationFile.deleteFile();
                }
                destinationFile.createDirectory();
                destinationFile = destinationFile.resolvePath(parentApplication.userid + "Video" + new Date().getTime() + ".mov");
                trace(destinationFile.nativePath);

                sourceFile.moveTo(destinationFile, true);
                break;
            }
I sure do hope this helps. This has been a very frustrating (and costly, in terms of our project being government grant funded and having deadlines we utterly failed to meet) experience, and I very much hope that these hard-won solutions might help others avoid the same one.