Something like MozAudioAvailable with WebKit's audio API?

I have been experimenting with Firefox's Audio API to detect silence in audio. (The point is to enable semi-automated transcription.)
Surprisingly, this simple code more or less suffices to detect silence and pause:
var audio = document.getElementsByTagName("audio")[0];
audio.addEventListener("MozAudioAvailable", pauseOnSilence, false);

function pauseOnSilence(event) {
    var val = event.frameBuffer[0];
    if (Math.abs(val) < 0.0001) {
        audio.pause();
    }
}
It's imperfect but as a proof of concept, I'm convinced.
My question now is, is there a way to do the same thing in WebKit's Audio API? From what I've seen of it, it's more oriented toward synthesis than sound processing (but perhaps I'm wrong?).
(I wish the WebKit team would just implement the same interface that Mozilla has created, and then move on to their fancier stuff...)

You should be able to do something like this using an AnalyserNode, or perhaps by thresholding in a JavaScriptAudioNode (since renamed to ScriptProcessorNode).
For example:
meter.onaudioprocess = function(e) {
    var buffer = e.inputBuffer.getChannelData(0); // Left buffer only.
    // TODO: Do the same for right.
    var isClipping = false;
    // Iterate through buffer to check if any of the |values| exceeds 1.
    for (var i = 0; i < buffer.length; i++) {
        var absValue = Math.abs(buffer[i]);
        if (absValue >= 1) {
            isClipping = true;
            break;
        }
    }
    this.isClipping = isClipping;
    if (isClipping) {
        this.lastClipTime = new Date();
    }
};
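In that sample, meter is a ScriptProcessorNode that has to be wired into the audio graph. A minimal sketch of the assumed setup, where audioContext and source stand in for your own context and input node:
// Hypothetical wiring for the meter node used above.
var meter = audioContext.createScriptProcessor(2048, 1, 1);
source.connect(meter); // e.g. a MediaElementAudioSourceNode
meter.connect(audioContext.destination);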
Rather than checking for clipping, you can simply check for sufficiently low levels.
Roughly adapted from this tutorial. Specific sample is here.
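For the silence-detection case specifically, here is a minimal sketch using an AnalyserNode instead; it assumes a browser that supports AnalyserNode.getFloatTimeDomainData (older WebKit builds only expose the byte variant) and mirrors the threshold from the question:
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var audio = document.getElementsByTagName("audio")[0];
var source = audioCtx.createMediaElementSource(audio);
var analyser = audioCtx.createAnalyser();
source.connect(analyser);
analyser.connect(audioCtx.destination);

var samples = new Float32Array(analyser.fftSize);
function pauseOnSilence() {
    analyser.getFloatTimeDomainData(samples);
    // Compute the RMS level of the current frame.
    var sum = 0;
    for (var i = 0; i < samples.length; i++) {
        sum += samples[i] * samples[i];
    }
    if (Math.sqrt(sum / samples.length) < 0.0001) {
        audio.pause();
    }
    requestAnimationFrame(pauseOnSilence);
}
requestAnimationFrame(pauseOnSilence);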

How to use the changeToMediaAtIndex method in TVMLKit JS

How do I use the changeToMediaAtIndex method in TVMLKit JS to change the video in the playlist?
When I use this method on the Player class (player.changeToMediaAtIndex(2)), nothing happens...
I was having issues with this too. My code looked like this.
var player = new Player();
player.playlist = new Playlist();
this.currentMediaItemIndex = 0;
for (var i = 0; i < videoItemsList.length; ++i) {
    var video = new MediaItem('video', videoItemsList[i].videoUrl.valueOf());
    video.artworkImageURL = videoItemsList[i].artworkUrl;
    video.title = videoItemsList[i].title;
    video.resumeTime = videoItemsList[i].watchedSeconds;
    if (videoItemsList[i].videoId == this.startingVideoId) {
        this.currentMediaItemIndex = i;
    }
    player.playlist.push(video);
}
player.play();
player.changeToMediaAtIndex(this.currentMediaItemIndex);
I would call play and then immediately call to change the media item. I tried a different ordering (calling play after changing the item), and I noticed that when I stepped through the code it would work as intended. Note that this.startingVideoId is set on the controller by an action before calling the play function that contains the above code. videoItemsList is also defined above and holds Objects with the video information.
I decided to add an event listener and wait until I received an event from the player before trying to change the currentItem. I removed player.changeToMediaAtIndex(this.currentMediaItemIndex); from the play function and moved it into the event listener.
I added the event listener into the play function when the player is created like so.
player.addEventListener("stateDidChange", this.stateDidChange);
Then I implemented the event listener to check the playlist and see if it is playing the right video when it goes into a playing state.
private stateDidChange = (event) => {
    if (event.state == "playing") {
        var player = <Player>event.target;
        for (var i = 0; i < player.playlist.length; ++i) {
            if (player.playlist.item(i) == player.currentMediaItem) {
                if (i != this.currentMediaItemIndex) {
                    player.changeToMediaAtIndex(this.currentMediaItemIndex);
                }
                break; // Break out of the loop since we don't care about the rest.
            }
        }
    }
};
This works great now and always starts from the right location. You will need additional state tracking with event listeners, such as changing currentMediaItemIndex when the track changes, but I hope this gets you on the right track.
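For example, here is a minimal sketch of that extra tracking, assuming your TVMLKit version dispatches the player's mediaItemDidChange event (verify against your SDK), registered alongside the stateDidChange listener:
player.addEventListener("mediaItemDidChange", (event) => {
    var p = <Player>event.target;
    // Keep the saved index in sync with whatever the player is actually showing.
    for (var i = 0; i < p.playlist.length; ++i) {
        if (p.playlist.item(i) == p.currentMediaItem) {
            this.currentMediaItemIndex = i;
            break;
        }
    }
});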
There still aren't a lot of good tvOS answers on here yet. Hopefully this will help.
Happy programming.

Is there a way to detect audio frequency in the HTML5 Web Audio API?

I would like to know whether there is a way to detect audio frequency from the microphone in HTML5 Web Audio. I want to make an online guitar tuner, and I need the audio frequency in hertz from the sound input. I've seen some EQ and filter effects, but I didn't see anything about frequency recognition.
EDIT:
I found this: http://www.smartjava.org/content/exploring-html5-web-audio-visualizing-sound
The 2nd point (the analyser node) is really interesting. I've seen his source code, but I can't figure out how to connect the analyser to the microphone input.
He calls a playSound() function when the mp3 file starts to play, and draws his canvas there. But I do not have a playSound()-like function...
I wrote a web audio library which, among other things, can detect frequency from mic input. Check it out at https://github.com/rserota/wad#pitch-detection
var voice = new Wad({ source: 'mic' });
var tuner = new Wad.Poly();
tuner.add(voice);
voice.play();
tuner.updatePitch(); // The tuner is now calculating the pitch and note name of its input 60 times per second. These values are stored in tuner.pitch and tuner.noteName.

var logPitch = function() {
    console.log(tuner.pitch, tuner.noteName);
    requestAnimationFrame(logPitch);
};
logPitch();
// If you sing into your microphone, your pitch will be logged to the console in real time.

tuner.stopUpdatingPitch(); // Stop calculating the pitch if you don't need to know it anymore.
You should be able to use BiquadFilterNode.
Example code from the link:
var audioCtx = new AudioContext();
var biquadFilter = audioCtx.createBiquadFilter();
// getFrequencyResponse expects Float32Arrays: the input frequencies in Hz,
// plus output arrays that receive the filter's magnitude and phase response.
var myFrequencyArray = new Float32Array([440]);
var magResponseOutput = new Float32Array(1);
var phaseResponseOutput = new Float32Array(1);
biquadFilter.getFrequencyResponse(myFrequencyArray, magResponseOutput, phaseResponseOutput);
You can use the following code to get the frequencies from the mic.
navigator.mediaDevices.getUserMedia({ audio: true }).then(function(localStream) {
    var audioContext = new (window.AudioContext || window.webkitAudioContext)();
    var input = audioContext.createMediaStreamSource(localStream);
    var analyser = audioContext.createAnalyser();
    var scriptProcessor = audioContext.createScriptProcessor();

    // Some analyser setup
    analyser.smoothingTimeConstant = 0;
    analyser.fftSize = 64;

    input.connect(analyser);
    analyser.connect(scriptProcessor);
    scriptProcessor.connect(audioContext.destination);

    var getAverageVolume = function(array) {
        var length = array.length;
        var values = 0;
        for (var i = 0; i < length; i++) {
            values += array[i];
        }
        return values / length;
    };

    var onAudio = function() {
        var tempArray = new Uint8Array(analyser.frequencyBinCount);
        analyser.getByteFrequencyData(tempArray);
        var latestFrequency = getAverageVolume(tempArray);
        // use latestFrequency
    };
    scriptProcessor.onaudioprocess = onAudio;
})
.catch(function() {
    // Handle error
});
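Note that averaging the getByteFrequencyData bins, as above, gives you an overall level rather than a pitch in hertz. For a tuner you typically estimate the fundamental from the time-domain signal instead. Below is a rough autocorrelation sketch, assuming an analyser like the one above but with fftSize raised to 2048, and the sample rate taken from the AudioContext; a real tuner would use normalized autocorrelation or YIN:
var detectPitch = function(analyser, sampleRate) {
    var buf = new Float32Array(analyser.fftSize); // assumes fftSize is large enough, e.g. 2048
    analyser.getFloatTimeDomainData(buf);
    var bestLag = -1;
    var bestCorr = 0;
    // Search lags corresponding to roughly 40 Hz - 1000 Hz (covers a guitar).
    var minLag = Math.floor(sampleRate / 1000);
    var maxLag = Math.floor(sampleRate / 40);
    for (var lag = minLag; lag <= maxLag && lag < buf.length; lag++) {
        var corr = 0;
        for (var i = 0; i + lag < buf.length; i++) {
            corr += buf[i] * buf[i + lag];
        }
        if (corr > bestCorr) {
            bestCorr = corr;
            bestLag = lag;
        }
    }
    return bestLag > 0 ? sampleRate / bestLag : null; // estimated fundamental in Hz
};
// Usage: var hz = detectPitch(analyser, audioContext.sampleRate);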

Filling in my own Web Audio Buffer is not working

I'm using Web Audio for various purposes and while samples loaded via URL and oscillators are working and playing properly, building a custom source buffer is not. I have tried to load my own AudioBuffer into an AudioBufferSourceNode using the code below and through the Chrome-NetBeans debugger I can see that it's loading the buffer with data and no errors are flagged, but when start is called, no sound is ever produced. Note that I'm just filling the buffer with noise, but I plan to fill it with my own custom wave data. I realize it's likely that I'm filling the buffer with the wrong data type, but I have been unable to find any documentation or examples regarding the proper way of doing it. Any help would be appreciated.
var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var frameCount = 2000;
var sampleRate = 4000;
var myBuffer = audioContext.createBuffer(2, frameCount, sampleRate);

// FILL WITH WHITE NOISE
for (var i = 0; i < frameCount; i++) {
    myBuffer[i] = Math.random() * 2 - 1;
}

sourceNode = audioContext.createBufferSource();
sourceNode.buffer = myBuffer;
sourceNode.connect(audioContext.destination);
sourceNode.start(0);
This will synthesize your noise inside a callback that is invoked every time another BUFF_SIZE samples need to be rendered:
var BUFF_SIZE = 2048; // spec allows smaller, yet do not go below 1024

var audio_context = new AudioContext();
var gain_node = audio_context.createGain();
gain_node.connect(audio_context.destination);

var source_node = audio_context.createScriptProcessor(BUFF_SIZE, 1, 1);
source_node.onaudioprocess = function(event) {
    var synth_buff = event.outputBuffer.getChannelData(0); // mono for now

    // FILL WITH WHITE NOISE
    for (var i = 0, buff_size = synth_buff.length; i < buff_size; i++) {
        synth_buff[i] = Math.random() * 2 - 1;
    }
};
source_node.connect(gain_node);
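As an aside, the original AudioBuffer approach can also be made to work. The samples must be written into the Float32Array returned by getChannelData, not into the AudioBuffer object itself, which is the likely cause of the silence (also, some implementations reject sample rates as low as 4000 Hz, so staying at or above 8000 Hz is safer). A minimal sketch reusing myBuffer and frameCount from the question:
// Write the noise into each channel's underlying Float32Array.
var left = myBuffer.getChannelData(0);
var right = myBuffer.getChannelData(1);
for (var i = 0; i < frameCount; i++) {
    left[i] = Math.random() * 2 - 1;
    right[i] = Math.random() * 2 - 1;
}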

AS3: Capturing compressed stream from microphone

Right now I have code like this:
soundData = new ByteArray();
microphone = Microphone.getMicrophone();
microphone.codec = SoundCodec.SPEEX;
microphone.rate = 8;
microphone.gain = 100;
microphone.addEventListener(SampleDataEvent.SAMPLE_DATA, micSampleDataHandler);

function micSampleDataHandler(event:SampleDataEvent):void {
    while (event.data.bytesAvailable) {
        var sample:Number = event.data.readFloat();
        soundData.writeFloat(sample);
    }
}
The raw data is recorded from the microphone. How do I go about capturing it into a ByteArray after the Speex codec compression is applied? Note that the converted data must play back.
Refer to this code:
soundData.position = 0;
var soundOutput:Sound = new Sound();
soundOutput.addEventListener(SampleDataEvent.SAMPLE_DATA, playSound);
soundOutput.play();

function playSound(event:SampleDataEvent):void {
    if (soundData.bytesAvailable <= 0) {
        return;
    }
    for (var i:int = 0; i < 8192; i++) {
        var sample:Number = 0;
        if (soundData.bytesAvailable > 0) {
            sample = soundData.readFloat();
        }
        // Write the mono sample to both the left and right channels.
        event.data.writeFloat(sample);
        event.data.writeFloat(sample);
    }
}
With SoundCodec.SPEEX, the playback rate of the code above is not 1x, so you should correct the playSound function. Maybe you already tested this; if you remove the microphone.codec = SoundCodec.SPEEX; line you will notice the difference.
More information: Adobe's official "Capturing sound input" documentation.
There are some known problems when recording in Speex.
Refer to the following articles:
http://forums.adobe.com/message/3571251#3571251
http://forums.adobe.com/message/3584747
If the SoundFormat indicates Speex, the audio is compressed mono sampled at 16 kHz. In Flash, a Sound object plays at 44.1 kHz. Since you're sampling at 16 kHz (Speex), the playback side consumes data through the SampleDataEvent handler about 2.75 times faster than you are getting that data from the microphone.
So you must change the for (or while) loop in playSound accordingly.
I recommend the following site; it's a great tutorial on how to adjust the playback rate:
http://www.kelvinluck.com/2008/11/first-steps-with-flash-10-audio-programming/

Low-latency audio streaming in Flash

Suppose there is a live WAV stream that can be reached at a certain URL, and we need to stream it with as little latency as possible. Using HTML5 <audio> for this task is a no-go, because browsers attempt to pre-buffer several seconds of the stream, and the latency goes up accordingly. That's the reason behind using Flash for this task. However, due to my inexperience with this technology, I only managed to get occasional clicks and white noise. What's wrong in the code below? Thanks.
var soundBuffer:ByteArray = new ByteArray();
var soundStream:URLStream = new URLStream();
soundStream.addEventListener(ProgressEvent.PROGRESS, readSound);
soundStream.load(new URLRequest(WAV_FILE_URL));

var sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, playSound);
sound.play();

function readSound(event:ProgressEvent):void {
    soundStream.readBytes(soundBuffer, 0, soundStream.bytesAvailable);
}

function playSound(event:SampleDataEvent):void {
    /* The docs say that if we send too few samples,
       Sound will consider it an EOF */
    var samples:int = (soundBuffer.length - soundBuffer.position) / 4;
    var toadd:int = 4096 - samples;
    try {
        for (var c:int = 0; c < samples; c++) {
            var n:Number = soundBuffer.readFloat();
            event.data.writeFloat(n);
            event.data.writeFloat(n);
        }
    } catch (e:Error) {
        ExternalInterface.call("errorReport", e.message);
    }
    // Pad with silence up to 4096 samples so Sound doesn't treat it as EOF.
    for (var d:int = 0; d < toadd; d++) {
        event.data.writeFloat(0);
        event.data.writeFloat(0);
    }
}
Like The_asMan pointed out, playing a WAV file is not that easy. See as3wavsound for an example.
If your goal is low latency, the best option would be to convert to MP3, so you can just use a SoundLoaderContext.