How to release memory using the Web Audio API?

var context = new window.AudioContext();
var request = cc.loader.getXMLHttpRequest();
request.open("GET", 'res/raw-assets/resources/audio/bgm.mp3', true);
request.responseType = "arraybuffer";
request.onload = function () {
    context.decodeAudioData(request.response, function (buffer) {
        // success
        cc.log('success');
        window.buffer = buffer;
        playBgm();
    }, function () {
        // error
    });
};
request.onerror = function () {
    // error
};
request.send();

function playBgm() {
    var audio = context.createBufferSource();
    audio.buffer = window.buffer;
    var _volume = context.createGain();
    _volume.gain.value = 1;
    _volume.connect(context.destination);
    audio.connect(_volume);
    audio.start(0);
}
In my code I load an MP3 file and decode it into an AudioBuffer (window.buffer), then I play it successfully. But it uses a lot of memory, about 100 MB. How do I release it?
I tried this:
audio.stop()
audio = null
window.buffer = null
//context.close()
//context = null
In the Chrome memory view, sometimes the memory is released in about 10 seconds, sometimes in about a minute, and sometimes it never seems to be released at all.
I want to know whether my code is the right way to release an AudioBuffer. Do I need to close the AudioContext?
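For reference, a fuller cleanup sketch (my own suggestion, not code from the question, and assuming audio and _volume are kept in an outer scope): disconnect the nodes so the audio graph drops its references, null out your own, and close the context. The AudioBuffer itself is freed by the garbage collector, so the exact timing is up to the browser, which matches the varying delays seen in the memory view.
function releaseBgm() {
    audio.stop();
    audio.disconnect();      // detach the source from the graph
    _volume.disconnect();    // detach the gain node too
    audio = null;
    window.buffer = null;    // drop the last reference to the AudioBuffer
    context.close().then(function () {  // close() returns a Promise and releases audio resources
        context = null;
    });
}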

Related

getUserMedia and MediaRecorder in HTML

I record audio via getUserMedia and MediaRecorder:
...
navigator.mediaDevices.getUserMedia(constraints).then(mediaStream => {
    try {
        const mediaRecorder = new MediaRecorder(mediaStream);
        mediaRecorder.ondataavailable = vm.mediaDataAvailable;
        mediaRecorder.start(1000);
        ...
When I receive the chunks in the callback, I send them to a web API via WebSockets, which simply writes the parts one after another to a file:
mediaDataAvailable(e) {
    if (!e.data || e.data.size === 0) {
        return;
    }
    vm.websocket.SendBlob(e.data);
    ...
The file which is created on the web server side in the web API (let's call it "server.webm") does not work correctly. More exactly: it plays the first n seconds (n is the time I chose for the start command), then it stops. This means the first chunk is transferred correctly, but it seems that appending the 2nd, 3rd, ... chunks to the file does not work. If I push the chunks onto an array in the web page and then write them to a file, the resulting file (let's call it "client.webm") plays for the whole recording duration.
Creating the file on the web client side:
mediaDataAvailable(e) {
    if (!e.data || e.data.size === 0) {
        return;
    }
    vm.chunks.push(e.data);
    ...
stopCapturing() {
    var blob = new Blob(this.chunks, { type: 'audio/webm' });
    var url = window.URL.createObjectURL(blob);
    var a = document.createElement('a');
    a.style.display = 'none';
    a.href = url;
    a.download = 'client.webm';
    document.body.appendChild(a);
    a.click();
I compared the files client.webm and server.webm. They are quite similar, but there are certain parts in server.webm that are not in client.webm.
Anybody have any ideas? The server code looks like the following:
private async Task Echo(HttpContext con, WebSocket webSocket)
{
    System.IO.BinaryWriter Writer = new System.IO.BinaryWriter(System.IO.File.OpenWrite(@"server.webm"));
    var buffer = new byte[1024 * 4];
    WebSocketReceiveResult result = await webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
    while (!result.CloseStatus.HasValue)
    {
        Writer.Write(buffer);
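        // BUG (see the resolution below): the Write call above dumps the
        // entire 4 KB buffer every iteration instead of only the bytes
        // actually received; the fix is Writer.Write(buffer, 0, result.Count);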
        Writer.Flush();
        await webSocket.SendAsync(new ArraySegment<byte>(buffer, 0, result.Count), result.MessageType, result.EndOfMessage, CancellationToken.None);
        result = await webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
    }
    await webSocket.CloseAsync(result.CloseStatus.Value, result.CloseStatusDescription, CancellationToken.None);
    Writer.Close();
}
Resolved: in the server code I was writing all bytes of the receive buffer to the file, instead of only the count of bytes actually received.

Play audio from an ArrayBuffer of bytes?

I have an ArrayBuffer that comes from a REST endpoint I created with Java that returns a byte[] array; I managed to get the array over HTTP (I am using Angular), and now I want to play the audio in the browser. I did some research and found the Web Audio API, but I get an error: I cannot decode the array.
context = new AudioContext();
audioArray: ArrayBuffer;
buf;

let arrayBuffer = new ArrayBuffer(this.audioArray.byteLength);
let bufferView = new Uint8Array(arrayBuffer);
for (let i = 0; i < this.audioArray.byteLength; i++) {
    bufferView[i] = this.audioArray[i];
}
console.log(arrayBuffer);
// this function should decode the array but the error occurs
// DOMException: Unable to decode audio data
this.context.decodeAudioData(this.audioArray).then((buffer) => {
    this.buf = buffer;
    this.play();
}).catch((error) => {
    console.log(error);
});
console.log(this.audioArray);

play() {
    let source = this.context.createBufferSource();
    source.buffer = this.buf;
    source.connect(this.context.destination);
    source.start(0);
}
And I am getting the array in the ngOnInit function by calling the REST API:
this.radioService.generateVoiceMethode().subscribe(res => {
    console.log(res);
    this.audioArray = new ArrayBuffer(res.byteLength);
    this.audioArray = res;
    console.log(this.audioArray);
});
Though this is an old question, I am sharing an answer that worked for me. It may be useful for someone in the future.
async play(data: ArrayBuffer) {
    const context = new AudioContext();
    const buffer = await context.decodeAudioData(data);
    const source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    source.start();
}
The code above works in Angular 8.
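For context, a sketch of how this might be wired to Angular's HttpClient (the URL and injected http service are placeholders, not from the question); the key point is requesting the response as an arraybuffer so decodeAudioData gets raw bytes rather than JSON:
this.http.get('/api/voice', { responseType: 'arraybuffer' })
    .subscribe((data: ArrayBuffer) => this.play(data));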

HTML5 web audio controls

I have a music playing example: http://www.smartjava.org/examples/webaudio/example3.html
And I need to show an HTML5 audio player (with controls) for this song. How can I do it?
The JavaScript code from the example is below:
// create the audio context (Chrome only for now)
if (!window.AudioContext) {
    if (!window.webkitAudioContext) {
        alert('no audiocontext found');
    }
    window.AudioContext = window.webkitAudioContext;
}
var context = new AudioContext();
var audioBuffer;
var sourceNode;
var analyser;
var javascriptNode;

// get the context from the canvas to draw on
var ctx = $("#canvas").get()[0].getContext("2d");

// create a gradient for the fill. Note the strange
// offset, since the gradient is calculated based on
// the canvas, not the specific element we draw
var gradient = ctx.createLinearGradient(0, 0, 0, 300);
gradient.addColorStop(1, '#000000');
gradient.addColorStop(0.75, '#ff0000');
gradient.addColorStop(0.25, '#ffff00');
gradient.addColorStop(0, '#ffffff');

// load the sound
setupAudioNodes();
loadSound("http://www.audiotreasure.com/mp3/Bengali/04_john/04_john_04.mp3");

function setupAudioNodes() {
    // set up a javascript node
    javascriptNode = context.createScriptProcessor(2048, 1, 1);
    // connect to destination, else it isn't called
    javascriptNode.connect(context.destination);
    // set up an analyser
    analyser = context.createAnalyser();
    analyser.smoothingTimeConstant = 0.3;
    analyser.fftSize = 512;
    // create a buffer source node
    sourceNode = context.createBufferSource();
    sourceNode.connect(analyser);
    analyser.connect(javascriptNode);
    sourceNode.connect(context.destination);
}

// load the specified sound
function loadSound(url) {
    var request = new XMLHttpRequest();
    request.open('GET', url, true);
    request.responseType = 'arraybuffer';
    // when loaded, decode the data
    request.onload = function () {
        context.decodeAudioData(request.response, function (buffer) {
            // when the audio is decoded, play the sound
            playSound(buffer);
        }, onError);
    };
    request.send();
}

function playSound(buffer) {
    sourceNode.buffer = buffer;
    sourceNode.start(0);
}

// log if an error occurs
function onError(e) {
    console.log(e);
}

// when the javascript node is called
// we use information from the analyser node
// to draw the volume
javascriptNode.onaudioprocess = function () {
    // get the average for the first channel
    var array = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(array);
    // clear the current state
    ctx.clearRect(0, 0, 1000, 325);
    // set the fill style
    ctx.fillStyle = gradient;
    drawSpectrum(array);
};

function drawSpectrum(array) {
    for (var i = 0; i < array.length; i++) {
        var value = array[i];
        ctx.fillRect(i * 5, 325 - value, 3, 325);
        // console.log([i, value])
    }
}
I think what you want is to use an audio tag as your source and use createMediaElementSource to pass the audio to Web Audio for visualization.
Beware that createMediaElementSource checks for CORS access, so you must have appropriate cross-origin access for this to work. (It looks like your audio source doesn't return the appropriate access headers.)
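A minimal sketch of that approach (the element id and file name are placeholders): the audio tag supplies the native controls, and createMediaElementSource feeds the same audio through the example's existing analyser graph:
<audio id="player" src="song.mp3" controls crossorigin="anonymous"></audio>

var audioElement = document.getElementById('player');
var sourceNode = context.createMediaElementSource(audioElement);
sourceNode.connect(analyser);            // reuse the analyser from the example
analyser.connect(javascriptNode);
sourceNode.connect(context.destination);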

How to get a MediaStream object from an HTML5 video element in JavaScript

Hi all,
I'm working on peer-to-peer communication using WebRTC; we have a MediaStream object from getUserMedia which is given as the input stream to the peer connection. Here I need a video stream from a video file on the local drive, which is playing in an HTML5 video element.
Is it possible to create a MediaStream object from the video tag?
Thanks,
suri
For now you can't get a media stream from a video tag, but it should be possible in the future, as explained on MDN:
MediaStream objects have a single input and a single output. A MediaStream object generated by getUserMedia() is called local, and has as its source input one of the user's cameras or microphones. A non-local MediaStream may be representing a media element, like <video> or <audio>, a stream originating over the network and obtained via the WebRTC PeerConnection API, or a stream created using the Web Audio API MediaStreamAudioSourceNode.
But you can use the Media Source Extensions API to do what you want: you have to put the local file into a stream and append it to a MediaSource object. You can learn more about MSE here: http://www.w3.org/TR/media-source/
And you can find a demo and the source of the method above here.
2021 update: it is now possible using the MediaRecorder interface: https://developer.mozilla.org/en-US/docs/Web/API/MediaRecorder
An example from the same page:
if (navigator.mediaDevices) {
    console.log('getUserMedia supported.');
    var constraints = { audio: true };
    var chunks = [];
    navigator.mediaDevices.getUserMedia(constraints)
        .then(function (stream) {
            var mediaRecorder = new MediaRecorder(stream);
            visualize(stream);
            record.onclick = function () {
                mediaRecorder.start();
                console.log(mediaRecorder.state);
                console.log("recorder started");
                record.style.background = "red";
                record.style.color = "black";
            }
            stop.onclick = function () {
                mediaRecorder.stop();
                console.log(mediaRecorder.state);
                console.log("recorder stopped");
                record.style.background = "";
                record.style.color = "";
            }
            mediaRecorder.onstop = function (e) {
                console.log("data available after MediaRecorder.stop() called.");
                var clipName = prompt('Enter a name for your sound clip');
                var clipContainer = document.createElement('article');
                var clipLabel = document.createElement('p');
                var audio = document.createElement('audio');
                var deleteButton = document.createElement('button');
                clipContainer.classList.add('clip');
                audio.setAttribute('controls', '');
                deleteButton.innerHTML = "Delete";
                clipLabel.innerHTML = clipName;
                clipContainer.appendChild(audio);
                clipContainer.appendChild(clipLabel);
                clipContainer.appendChild(deleteButton);
                soundClips.appendChild(clipContainer);
                audio.controls = true;
                var blob = new Blob(chunks, { 'type': 'audio/ogg; codecs=opus' });
                chunks = [];
                var audioURL = URL.createObjectURL(blob);
                audio.src = audioURL;
                console.log("recorder stopped");
                deleteButton.onclick = function (e) {
                    evtTgt = e.target;
                    evtTgt.parentNode.parentNode.removeChild(evtTgt.parentNode);
                }
            }
            mediaRecorder.ondataavailable = function (e) {
                chunks.push(e.data);
            }
        })
        .catch(function (err) {
            console.log('The following error occurred: ' + err);
        })
}
MDN also has a detailed mini tutorial: https://developer.mozilla.org/en-US/docs/Web/API/MediaStream_Recording_API/Recording_a_media_element
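For the original question specifically, that tutorial relies on HTMLMediaElement.captureStream(), which now gives you a MediaStream directly from a video element; a minimal sketch (the element id is a placeholder, and peerConnection stands in for your existing RTCPeerConnection):
var video = document.getElementById('myVideo');
// Firefox historically prefixed this as mozCaptureStream()
var stream = video.captureStream ? video.captureStream() : video.mozCaptureStream();
// the resulting tracks can be fed to a peer connection
stream.getTracks().forEach(function (track) {
    peerConnection.addTrack(track, stream);
});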

Chrome Web Audio doesn't play my audio

I just have the most basic (I think) implementation of Chrome's Web Audio API.
I'm on Chrome 13 with Web Audio enabled, but my sound just doesn't play. I can see that it gets loaded (http://pieterhordijk.com/sandbox/html5-audio-api/webkit-audiocontext-interface), but it just doesn't play.
window.onload = init;
var context;
var source;

function loadSample(url) {
    var request = new XMLHttpRequest();
    request.open("GET", url, true);
    request.responseType = "arraybuffer";
    request.onload = function () {
        source.buffer = context.createBuffer(request.response, false);
        source.looping = true;
        source.noteOn(0);
    }
    request.send();
}

function init() {
    context = new webkitAudioContext();
    source = context.createBufferSource();
    loadSample("/sandbox/test.oga");
}
OK, found the answer myself: I had to connect the source to the output.
source.connect(context.destination);
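For readers on current browsers: the synchronous createBuffer(arrayBuffer, mixToMono) overload, the looping property, and noteOn() used above were later removed from the spec. A rough modern equivalent of the fixed code (keeping the question's file path) would be:
var context = new AudioContext();

function loadSample(url) {
    var request = new XMLHttpRequest();
    request.open("GET", url, true);
    request.responseType = "arraybuffer";
    request.onload = function () {
        context.decodeAudioData(request.response, function (buffer) {
            var source = context.createBufferSource();
            source.buffer = buffer;
            source.loop = true;
            source.connect(context.destination); // the connection that was missing
            source.start(0);
        });
    };
    request.send();
}

loadSample("/sandbox/test.oga");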