Safari - HTML5 Audio

I'm using JavaScript to dynamically populate the audio tag with source info. It works fine in Chrome, but in Safari the source info changes within the audio tag and yet it plays the same song. Any ideas as to why this is happening?
http://www.chicagowebguru.com/HTML5Player/

In Safari, when you change the source, you also have to call .load() on the audio element to get it to actually load the new source.
Other browsers don't seem to need this.
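For example, a minimal sketch (newSrc is a placeholder for whatever URL you're switching to):
var player = document.getElementById("audio-player");
player.src = newSrc;  // swap in the new track
player.load();        // Safari won't pick up the change without this
player.play();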

Have you tried recreating the node?
// Remove the old player (removeChild must be called on the parent node)
var oldAudio = document.getElementById("audio-player");
oldAudio.parentNode.removeChild(oldAudio);
// Build a fresh <audio> element with three <source> children
var dynamicAudio = document.createElement("audio");
dynamicAudio.id = "audio-player";
var dynamicSound1 = document.createElement("source");
var dynamicSound2 = document.createElement("source");
var dynamicSound3 = document.createElement("source");
dynamicSound1.src = "http://www.chicagowebguru.com/audio/RainbowConnection.mp3";
dynamicSound2.src = "http://www.chicagowebguru.com/audio/RainbowConnection.ogg";
dynamicSound3.src = "http://www.chicagowebguru.com/audio/RainbowConnection.wav";
dynamicAudio.appendChild(dynamicSound1);
dynamicAudio.appendChild(dynamicSound2);
dynamicAudio.appendChild(dynamicSound3);
// Re-insert the new element right after the message container
var refNode = document.getElementById("message-container");
document.body.insertBefore(dynamicAudio, refNode.nextSibling);
Or you could use jQuery.
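Roughly the same idea in jQuery (a sketch, using the same IDs as above):
$("#audio-player").remove();
var $audio = $("<audio>", { id: "audio-player" });
$.each(["mp3", "ogg", "wav"], function (i, ext) {
    $audio.append($("<source>", { src: "http://www.chicagowebguru.com/audio/RainbowConnection." + ext }));
});
$audio.insertAfter("#message-container");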

Related

Empty microphone data from getUserMedia

Using the following code I get all zeroes in the audio stream from my microphone (using Chrome):
navigator.mediaDevices.getUserMedia({ audio: true }).then(
    function (stream) {
        var audioContext = new AudioContext();
        var source = audioContext.createMediaStreamSource(stream);
        var node = audioContext.createScriptProcessor(8192, 1, 1);
        source.connect(node);
        node.connect(audioContext.destination);
        node.onaudioprocess = function (e) {
            console.log("Audio:", e.inputBuffer.getChannelData(0));
        };
    }).catch(function (error) { console.error(error); });
I created a jsfiddle here: https://jsfiddle.net/g3dck4dr/
What's wrong here?
Umm, something in your hardware config is wrong? The fiddle works fine for me (that is, it shows non-zero values). Do other web audio input tests work, like https://webaudiodemos.appspot.com/input/index.html?
Test to make sure you've selected the right input, and that you don't have a hardware mute switch on.
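To double-check which inputs the browser actually sees, a quick sketch with the standard enumerateDevices() call (labels stay blank until the user has granted microphone permission):
navigator.mediaDevices.enumerateDevices().then(function (devices) {
    devices.filter(function (d) { return d.kind === "audioinput"; })
           .forEach(function (d) { console.log("Input:", d.label, d.deviceId); });
});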

HTML5 <audio> poor choice for LIVE streaming?

As discussed in a previous question, I have built a prototype (using MVC Web API, NAudio and NAudio.Lame) that streams live low-quality audio after converting it to MP3. The source stream is PCM: 8 kHz, 16-bit, mono, and I'm making use of HTML5's audio tag.
On both Chrome and IE11 there is a 15-34 second delay (high latency) before audio is heard from the browser, which, I'm told, is unacceptable for our end users. Ideally the latency would be no more than 5 seconds. The delay occurs even when using the preload="none" attribute in my audio tag.
Looking more closely at the issue, it appears as though both browsers will not start playing audio until they have received ~32K of audio data. With that in mind, I can affect the delay by changing Lame's MP3 'bitrate' setting. However, if I reduce the delay (by sending more data to the browser for the same length of audio), I will introduce audio drop-outs later.
Examples:
If I use Lame's V0 encoding the delay is nearly 34 seconds which requires almost 0.5 MB of source audio.
If I use Lame's ABR_32 encoding, I can reduce the delay to 10-15 seconds but I will experience pauses and drop-outs throughout the listening session.
Questions:
Any ideas how I can minimize the start-up delay (latency)?
Should I continue investigating various Lame 'presets' in hopes of picking the "right" one?
Could it be that MP3 is not the best format for live streaming?
Would switching to Ogg/Vorbis (or Ogg/OPUS) help?
Do we need to abandon HTML5's audio tag and use Flash or a java applet?
Thanks.
You cannot reduce the delay, since you have no control over the browser's code or its buffering size. The HTML5 specification does not enforce any constraint here, so I don't see any reason why it would improve.
You can, however, implement a solution with the Web Audio API (it's quite simple), where you handle the streaming yourself.
If you can split your MP3 into fixed-size chunks (so that each chunk's size is known beforehand, or at least at receive time), then you can have live streaming in 20 lines of code. The chunk size determines your latency.
The key is to use AudioContext::decodeAudioData.
// Fix up prefixing
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
var offset = 0;            // running play-head position, in seconds
var byteOffset = 0;
var minDecodeSize = 16384; // This is your chunk size
var request = new XMLHttpRequest();
request.onprogress = function (evt) {
    if (!request.response) return;
    var ab;
    if (request.response instanceof ArrayBuffer) {
        // Firefox's chunked mode hands you an ArrayBuffer as is
        ab = request.response;
    } else {
        // Chrome's XHR stream mode gives text, not an ArrayBuffer,
        // so copy the newly received bytes into one by hand
        var size = request.response.length - byteOffset;
        if (size < minDecodeSize) return;
        ab = new ArrayBuffer(size);
        var buf = new Uint8Array(ab);
        for (var i = 0; i < size; i++)
            buf[i] = request.response.charCodeAt(i + byteOffset) & 0xff;
        byteOffset = request.response.length;
    }
    context.decodeAudioData(ab, function (buffer) {
        playSound(buffer);
    }, function (e) { console.error("decode failed", e); });
};
request.open('GET', url, true);
request.responseType = expectedType; // 'stream' in Chrome, 'moz-chunked-arraybuffer' in Firefox, 'ms-stream' in IE
request.overrideMimeType('text/plain; charset=x-user-defined');
request.send(null);

function playSound(buffer) {
    var source = context.createBufferSource(); // creates a sound source
    source.buffer = buffer;                    // tell the source which sound to play
    source.connect(context.destination);       // connect the source to the speakers
    source.start(offset);                      // schedule each chunk right after the previous one
    // note: on older systems you may have to use the deprecated noteOn(time)
    offset += buffer.duration;
}
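For what it's worth, those XHR stream modes are obsolete now; the same chunk-decode-schedule loop can be sketched with fetch() and a stream reader. Same caveat as above: this assumes each network chunk is a decodable MP3 unit, which only holds if the server aligns chunks to frame boundaries.
var context = new AudioContext();
var playAt = 0;
fetch(url).then(function (response) {
    var reader = response.body.getReader();
    function pump() {
        return reader.read().then(function (result) {
            if (result.done) return;
            // result.value is a Uint8Array; slice() gives it its own ArrayBuffer
            context.decodeAudioData(result.value.slice().buffer, function (buffer) {
                var source = context.createBufferSource();
                source.buffer = buffer;
                source.connect(context.destination);
                source.start(playAt);      // schedule chunks back to back
                playAt += buffer.duration;
            });
            return pump();
        });
    }
    return pump();
});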

Nodejs + Firefox behavior in relation to the HTML5 <audio> element

I am using Node.js and Firefox to display a web page. That page has an HTML5 audio element. The problem I have is related to the calculation of the audio duration.
In my node js script I have:
var app = require('http').createServer(handler)
  , path = require("path")
  , io = require('socket.io').listen(app)
  , fs = require('fs')
  , exec = require('child_process').exec
  , spawn = require('child_process').spawn
  , util = require('util')
  , extensions = {
        ".html": "text/html",
        ".css": "text/css",
        ".js": "application/javascript",
        ".png": "image/png",
        ".gif": "image/gif",
        ".ttf": "application/x-font-ttf",
        ".jpg": "image/jpeg",
        ".mp3": "audio/mp3",
        ".wav": "audio/wav",
        ".ogg": "audio/ogg"
    }
  , Files = {};
In my html web page I have:
<audio id="idaudio" src="" type="audio/wav" >Your browser does not support the audio element.</audio>
And some javascript code from my web page:
var thissound=document.getElementById("idaudio");
thissound.src="http://localhost/Audio/Song.wav";
//thissound.src="/Audio/Song.wav";
thissound.addEventListener('loadeddata', function () {
    var durationaudio = (thissound.duration) * 1000;
});
When I check durationaudio I get the right number, and then I can play the song using thissound.play(). This code works in Firefox and Chromium.
If I change
thissound.src="http://localhost/Audio/Song.wav" -> thissound.src="/Audio/Song.wav"
and add the ".wav": "audio/wav" extension in the node script, I can still play the song in both Firefox and Chromium. In Chromium I also get the right durationaudio, but in Firefox I get durationaudio=Infinity. This is the problem. I don't know why Firefox is not able to get the right duration. Maybe I have to add some extension ... in the node script in order to allow Firefox to get the duration of the audio. Any ideas?
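No one answered this one here, but a common cause of duration=Infinity in Firefox is the server streaming the file without a Content-Length header, which makes the browser treat the resource as an endless stream. A sketch of a handler that serves the .wav with an explicit length (the handler name, extensions map, and paths come from the question; the body itself is an assumption, since the question elides it):
function handler(req, res) {
    var filePath = path.join(__dirname, req.url);  // e.g. "/Audio/Song.wav"
    fs.stat(filePath, function (err, stats) {
        if (err) { res.writeHead(404); return res.end(); }
        res.writeHead(200, {
            'Content-Type': extensions[path.extname(filePath)] || 'application/octet-stream',
            'Content-Length': stats.size  // lets Firefox compute a finite duration
        });
        fs.createReadStream(filePath).pipe(res);
    });
}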

Audio API: Fail to resume music and also visualize it. Is there bug in html5-audio?

I have a button. Every time it is clicked, music plays; when it's clicked a second time, the music should resume. I also want to visualize the music.
So I began with HTML5 audio (complete code at http://jsfiddle.net/karenpeng/PAW7r/):
$("#1").click(function(){
audio1.src = '1.mp3';
audio1.controls = true;
audio1.autoplay = true;
audio1.loop = true;
source = context.createMediaElementSource(audio1);
source.connect(analyser);
analyser.connect(context.destination);
});
But when it's clicked more than once, the console logs an error:
Uncaught Error: INVALID_STATE_ERR: DOM Exception 11
Then I switched to the Web Audio API and changed the source to:
source = context.createBufferSource();
The error is gone.
And then I need to visualize it.
But ironically, it only works with HTML5 audio!
(Complete code at http://jsfiddle.net/karenpeng/FvgQF/; it does not work on jsfiddle because I don't know how to write the Processing.js script properly, but it does run on my PC.)
var audio = new Audio();
audio.src = '2.mp3';
audio.controls = true;
audio.autoplay = true;
audio.loop=true;
var context = new webkitAudioContext();
var analyser = context.createAnalyser();
var source = context.createMediaElementSource(audio);
source.connect(analyser);
analyser.connect(context.destination);
var freqData = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(freqData);
//visualization using freqData
When I change the source to:
source = context.createBufferSource();
it does not show anything.
So is there a way to visualize it without the error, and still be able to resume it again and again?
Actually, I believe the problem is that you're trying to create a SECOND Web Audio node for the same media element. (Your code re-sets the src, controls, etc. on every click, but it's not creating a new Audio().) You should either hang on to the MediaElementAudioSourceNode you created, or create new Audio elements.
E.g.:
var context = new webkitAudioContext();
var analyser = context.createAnalyser();
var source = null;
var audio0 = new Audio();
$("#0").click(function () {
    audio0.src = 'http://www.bornemark.se/bb/mp3_demos/PoA_Sorlin_-_Stay_Up.mp3';
    audio0.controls = true;
    audio0.autoplay = true;
    audio0.loop = true;
    // Only create the MediaElementAudioSourceNode once per element
    if (source == null) {
        source = context.createMediaElementSource(audio0);
        source.connect(analyser);
        analyser.connect(context.destination);
    }
});
Hope that helps!
-Chris Wilson
From what I can tell, this is likely because the MediaElementSourceNode may only be able to take in an Audio element that isn't already playing. The Audio object is declared outside of the click handler, so the second time you click you're trying to analyze audio that's already in the middle of playing.
Note that the API doesn't seem to specify this, so I'm not 100% sure, but it makes intuitive sense.

Play audio stream with Audio API

I am working with the HTML5 audio API to play sound. This works fine with regular MP3 files, but with a sound stream such as http://95.173.167.24:8009 it fails to play.
Here is the code I'm using:
if ('webkitAudioContext' in window) {
    var myAudioContext = new webkitAudioContext();
}
request = new XMLHttpRequest();
request.open('GET', 'http://95.173.167.24:8009', true);
request.responseType = 'arraybuffer';
request.addEventListener('load', bufferSound, false);
request.send();

function bufferSound(event) {
    var request = event.target;
    var source = myAudioContext.createBufferSource();
    source.buffer = myAudioContext.createBuffer(request.response, false);
    source.connect(myAudioContext.destination);
    source.noteOn(0);
}
Can anyone point me in the right direction on this?
Any help is appreciated.
Thanks
The problem is likely that SHOUTcast is detecting your User-Agent string as a browser. It looks for any string with Mozilla in it, and says "Oh, that's a browser! Send them the admin panel."
You need to force the usage of the audio stream. Fortunately, this is easily done by adding a semicolon at the end of your URL:
http://95.173.167.24:8009/;
Note that the User-Agent string in your logs will be MPEG OVERRIDE.
This will work for most browsers. Some browsers may still not like the HTTP-like responses that come from SHOUTcast, but this will at least get you started.
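In the asker's code above, that just means changing the URL passed to open(); everything else stays the same:
request.open('GET', 'http://95.173.167.24:8009/;', true);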