Detect gaps in audio file without playing - actionscript-3

By reading the byte array of the audio file, is there any way to find all the gaps before playing it? I am using ActionScript 3.
What I've tried:
While the sound is playing, I can correctly find a gap with the code below, where SoundMixer.computeSpectrum(byteArr) fills byteArr with the current sound wave (or its FFT, if the FFTMode flag is set). See the reference.
isGap = true;
SoundMixer.computeSpectrum(byteArr);
for (var i:uint = 0; i < 256; i++)
{
    var num:Number = byteArr.readFloat();
    if (Math.abs(num) > 0.005) // samples range -1..1, so test the magnitude
    {
        isGap = false;
        break;
    }
}
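If the sound can be loaded into a Sound object first, one way to scan it without playback is Sound.extract() (Flash Player 10+), which decodes the loaded audio into 44.1 kHz stereo floats on demand. A rough sketch, with an arbitrary window size and the same threshold as above (the file name is hypothetical):

```actionscript
// Sketch (untested): scan a fully loaded Sound for silent stretches
// without ever playing it, using Sound.extract().
var snd:Sound = new Sound(new URLRequest("track.mp3")); // hypothetical file
snd.addEventListener(Event.COMPLETE, scanForGaps);

function scanForGaps(e:Event):void {
    const WINDOW:int = 2048;        // samples per analysis window
    const THRESHOLD:Number = 0.005; // same level as in the question
    var buf:ByteArray = new ByteArray();
    var pos:Number = 0;
    while (true) {
        buf.length = 0; // reuse the buffer for each window
        var got:Number = snd.extract(buf, WINDOW, pos);
        if (got <= 0) break;
        buf.position = 0;
        var silent:Boolean = true;
        for (var i:int = 0; i < got; i++) {
            // extract() yields 44.1 kHz stereo floats: left, right
            var l:Number = buf.readFloat();
            var r:Number = buf.readFloat();
            if (Math.abs(l) > THRESHOLD || Math.abs(r) > THRESHOLD) {
                silent = false;
                break;
            }
        }
        if (silent) trace("gap near sample", pos);
        pos += got;
    }
}
```

This finds windows that are entirely below the threshold; merging adjacent silent windows into one gap is left out of the sketch.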

Related

Actionscript 3.0 platformer game

I have a Flash game with the following code (http://pastie.org/9248528).
When I run it, the player just falls and doesn't stop when he hits a platform.
I tried debugging it and got an error with moveCharacter's timer, but I don't know if that is the main problem.
I also put the player inside a wall and stepped through with breakpoints; the code didn't detect that the player was inside the wall, so it skipped moving him back outside of it.
Does anyone have any idea what is wrong with my code?
The problem is in this code:
// Check if character falls off any platform
for (var i:int = 0; i < platform.length; i++) {
    if (player.x < platform[i].x || player.x > platform[i].x + platform[i].width) {
        onPlatform = false;
    }
}
Since the player cannot be on every platform at once, his x position is almost guaranteed to be out of bounds of at least one platform, which sets onPlatform to false. Instead, you need to keep a reference to the platform the player is on, like so:
var lastPlatform:Sprite; // holds reference to the last platform the player was on

// Function to move character
function moveCharacter(evt:TimerEvent):void {
    ....
    // Check if character falls off the platform he was last on
    if (lastPlatform != null && (player.x < lastPlatform.x || player.x > lastPlatform.x + lastPlatform.width)) {
        onPlatform = false;
    }
}
function detectCollisions():void {
    // Check for collisions with platforms
    for (var p:int = 0; p < platform.length; p++) {
        // Adjust character to platform level if within landing depth of the platform
        if (!onPlatform && player.hitTestObject(platform[p]) && lastPosY < platform[p].y) {
            lastPlatform = platform[p]; // save reference
            player.y = platform[p].y;
            jumping = false;
            onPlatform = true;
            dy = 0;
        // Prevent character from dropping sideways into platforms
        } else if (!onPlatform && player.hitTestObject(platform[p])) {
            player.x = lastPosX;
        }
    }
    .....
}
This should work better, though it is still not the most object-oriented way to do this. Hope this helps!

AS3: Capturing compressed stream from microphone

I currently have code like this:
soundData = new ByteArray();
microphone = Microphone.getMicrophone();
microphone.codec = SoundCodec.SPEEX;
microphone.rate = 8;
microphone.gain = 100;
microphone.addEventListener(SampleDataEvent.SAMPLE_DATA, micSampleDataHandler);

function micSampleDataHandler(event:SampleDataEvent):void {
    while (event.data.bytesAvailable) {
        var sample:Number = event.data.readFloat();
        soundData.writeFloat(sample);
    }
}
This records the raw data from the microphone. How do I capture the SPEEX-compressed data into a ByteArray instead? Note that the captured data must still play back.
Refer to this code:
soundData.position = 0;
var soundOutput:Sound = new Sound();
soundOutput.addEventListener(SampleDataEvent.SAMPLE_DATA, playSound);
soundOutput.play();

function playSound(event:SampleDataEvent):void {
    if (soundData.bytesAvailable <= 0)
    {
        return;
    }
    for (var i:int = 0; i < 8192; i++)
    {
        var sample:Number = 0;
        if (soundData.bytesAvailable > 0)
        {
            sample = soundData.readFloat();
        }
        event.data.writeFloat(sample);
        event.data.writeFloat(sample);
    }
}
When you use SoundCodec.SPEEX, the code above will not play back at 1x speed, so you need to correct the playSound function. You may have tested this already: if you remove microphone.codec = SoundCodec.SPEEX;, the rate is correct.
More information: Adobe's official "Capturing sound input" documentation.
There are some known problems when recording in Speex; see the following articles:
http://forums.adobe.com/message/3571251#3571251
http://forums.adobe.com/message/3584747
If the SoundFormat indicates Speex, the audio is compressed mono sampled at 16 kHz. In Flash, a Sound object plays at 44.1 kHz, so when your source is 16 kHz (Speex), the SampleDataEvent handler consumes data about 2.75 times faster than you are producing it.
Therefore you must change the for (or while) loop in playSound.
I recommend the following site; it is a great tutorial on adjusting the playback rate:
http://www.kelvinluck.com/2008/11/first-steps-with-flash-10-audio-programming/
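A minimal sketch (untested, and assuming soundData holds one 32-bit float per 16 kHz mono sample) of the rate adjustment that tutorial describes: step through the recorded buffer at 16000/44100 input frames per output frame, duplicating each mono sample into both output channels:

```actionscript
const RATE_RATIO:Number = 16000 / 44100; // input frames per output frame
var readHead:Number = 0;                 // fractional sample index into soundData

function playSoundResampled(event:SampleDataEvent):void {
    var totalSamples:int = soundData.length / 4; // 4 bytes per float
    for (var i:int = 0; i < 8192; i++) {
        var idx:int = int(readHead);
        var sample:Number = 0;
        if (idx < totalSamples) {
            soundData.position = idx * 4;
            sample = soundData.readFloat();
        }
        // write the same mono sample to the left and right channels
        event.data.writeFloat(sample);
        event.data.writeFloat(sample);
        readHead += RATE_RATIO; // advance at 16 kHz relative to 44.1 kHz output
    }
}
```

Linear interpolation between adjacent samples would sound smoother; this nearest-sample version is just the shortest loop that plays at the right speed.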

Seeking not working in Flex 4.5 NetStream byteArray

I am trying to play an FLV video file in Flex 4.5 with a NetStream byteArray. What I am doing is below:
Creating a NetStream and Video object
Attaching the NetStream to the Video
Reading the FLV file into a byteArray
Appending the byteArray to the NetStream using the appendBytes method
Playing the video
In this scenario the Play, Pause, and Stop functionality works fine.
But when I try to seek in the video, it does not work.
You can follow my code by clicking on this link: http://pastebin.com/fZp0mKDs
Can anybody tell me where I am going wrong in implementing seeking?
Any code sample or any kind of help would be appreciated.
I got it; the code below worked in my case:
// onMetaData: collect every keyframe timestamp and its corresponding file position
function onMetaData(informationObject:Object):void
{
    for (var propertyName:String in informationObject)
    {
        if (propertyName == "keyframes")
        {
            var kfObject:Object = informationObject[propertyName];
            var timeArray:Array = kfObject["times"];
            var filePositionArray:Array = kfObject["filepositions"];
            for (var i:int = 0; i < timeArray.length; i++)
            {
                var tagPosition:int = filePositionArray[i]; // read the file position
                var timestamp:Number = timeArray[i];        // read the timestamp
                tags.push({timestamp:timestamp, tagPosition:tagPosition});
            }
        }
    }
}
// On seek, find the keyframe nearest the requested time and its file position
protected function seek_click(seektime:Number):void
{
    var currentTime:Number = 0;
    var previousTime:Number = 0;
    for (var i:int = 1; i < tags.length; i++)
    {
        currentTime = tags[i].timestamp;
        previousTime = tags[i - 1].timestamp;
        if (previousTime < seektime && seektime < currentTime)
        {
            seekPos = tags[i - 1].tagPosition;
            stream.seek(previousTime);
            break;
        }
    }
}
// On the seek notification, append bytes starting from the saved file position
private function netStatusHandler(event:NetStatusEvent):void
{
    switch (event.info.code)
    {
        case "NetStream.Seek.Notify":
            stream.appendBytesAction(NetStreamAppendBytesAction.RESET_SEEK);
            totalfilePositionArray.position = seekPos;
            var bytes:ByteArray = new ByteArray();
            totalfilePositionArray.readBytes(bytes);
            stream.appendBytes(bytes);
            stream.resume();
            break;
    }
}
To inject keyframe metadata into the FLV file, use an injector tool, e.g. FLV MetaData Injector:
http://www.buraks.com/flvmdi/
I think there is a problem with seeking in a byteArray constructed after reading the file. Just play your netStream directly; that works:
var fileName:String = "dummy-video.flv";
ns.play(fileName);

Something like MozAudioAvailable with Webkit's audio API?

I have been experimenting with Firefox's Audio API to detect silence in audio. (The point is to enable semi-automated transcription.)
Surprisingly, this simple code more or less suffices to detect silence and pause:
var audio = document.getElementsByTagName("audio")[0];
audio.addEventListener("MozAudioAvailable", pauseOnSilence, false);

function pauseOnSilence(event) {
    var val = event.frameBuffer[0];
    if (Math.abs(val) < .0001) {
        audio.pause();
    }
}
It's imperfect but as a proof of concept, I'm convinced.
My question now is: is there a way to do the same thing with Webkit's Audio API? From what I've seen, it is more oriented toward synthesis than sound processing (but perhaps I'm wrong?).
(I wish the Webkit team would just implement the same interface that Mozilla has created, and then move on to their fancier stuff...)
You should be able to do something like this using an AnalyserNode, or perhaps by thresholding in a JavaScriptAudioNode (since renamed ScriptProcessorNode).
For example:
meter.onaudioprocess = function(e) {
    var buffer = e.inputBuffer.getChannelData(0); // Left buffer only.
    // TODO: Do the same for right.
    var isClipping = false;
    // Iterate through the buffer to check if any of the values exceeds 1.
    for (var i = 0; i < buffer.length; i++) {
        var absValue = Math.abs(buffer[i]);
        if (absValue >= 1) {
            isClipping = true;
            break;
        }
    }
    this.isClipping = isClipping;
    if (isClipping) {
        this.lastClipTime = new Date();
    }
};
Rather than clipping, you can simply check for low enough levels.
Roughly adapted from this tutorial. Specific sample is here.
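For the silence case, the clipping loop above reduces to a small pure function (the 0.0001 threshold is carried over from the MozAudioAvailable example); the trailing comment shows how it would plug into a JavaScriptAudioNode/ScriptProcessorNode handler:

```javascript
// Return true if every sample in the buffer is below the threshold.
// Works on any array-like of samples (Float32Array or plain Array).
function isSilent(buffer, threshold) {
  threshold = threshold === undefined ? 0.0001 : threshold;
  for (var i = 0; i < buffer.length; i++) {
    if (Math.abs(buffer[i]) >= threshold) {
      return false;
    }
  }
  return true;
}

// Inside a ScriptProcessorNode handler it would be used like:
//   node.onaudioprocess = function (e) {
//     if (isSilent(e.inputBuffer.getChannelData(0))) audio.pause();
//   };
```

Checking only frameBuffer[0], as the Firefox snippet does, can misfire on a zero crossing; scanning the whole block is a little more robust.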

Low-latency audio streaming in Flash

Suppose there is a live WAV stream that can be reached at a certain URL, and we need to stream it with as little latency as possible. Using HTML5 <audio> for this task is a no-go, because browsers attempt to pre-buffer several seconds of the stream, and the latency goes up accordingly. That's the reason behind using Flash for this task. However, due to my inexperience with this technology, I only managed to get occasional clicks and white noise. What's wrong in the code below? Thanks.
var soundBuffer:ByteArray = new ByteArray();
var soundStream:URLStream = new URLStream();
soundStream.addEventListener(ProgressEvent.PROGRESS, readSound);
soundStream.load(new URLRequest(WAV_FILE_URL));

var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, playSound);
sound.play();

function readSound(event:ProgressEvent):void {
    soundStream.readBytes(soundBuffer, 0, soundStream.bytesAvailable);
}

function playSound(event:SampleDataEvent):void {
    /* The docs say that if we send too few samples,
       Sound will consider it an EOF */
    var samples:int = (soundBuffer.length - soundBuffer.position) / 4;
    var toadd:int = 4096 - samples;
    try {
        for (var c:int = 0; c < samples; c++) {
            var n:Number = soundBuffer.readFloat();
            event.data.writeFloat(n);
            event.data.writeFloat(n);
        }
    } catch (e:Error) {
        ExternalInterface.call("errorReport", e.message);
    }
    for (var d:int = 0; d < toadd; d++) {
        event.data.writeFloat(0);
        event.data.writeFloat(0);
    }
}
As The_asMan pointed out, playing a WAV file is not that easy. See as3wavsound for an example.
If your goal is low latency, the best option would be to convert to MP3, so you can just use a SoundLoaderContext with a small buffer time.
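One likely cause of the clicks and noise (an assumption, since the format behind WAV_FILE_URL is not specified): the stream is fed to writeFloat as-is, so the 44-byte WAV header gets played as audio, and typical WAV files carry 16-bit signed PCM rather than the 32-bit floats readFloat expects. A sketch of a replacement playSound that decodes such a stream, assuming a canonical header and 44.1 kHz stereo data:

```actionscript
import flash.utils.Endian;

// WAV files are little-endian; ByteArray defaults to big-endian.
soundBuffer.endian = Endian.LITTLE_ENDIAN;

function playSound(event:SampleDataEvent):void {
    if (soundBuffer.position == 0 && soundBuffer.length > 44) {
        soundBuffer.position = 44; // skip the canonical RIFF/fmt/data header
    }
    // 2 channels * 2 bytes per 16-bit sample = 4 bytes per frame
    var frames:int = (soundBuffer.length - soundBuffer.position) / 4;
    if (frames > 4096) frames = 4096;
    for (var i:int = 0; i < frames; i++) {
        // convert signed 16-bit PCM to a float in -1..1
        event.data.writeFloat(soundBuffer.readShort() / 32768.0);
        event.data.writeFloat(soundBuffer.readShort() / 32768.0);
    }
    for (var j:int = frames; j < 4096; j++) {
        event.data.writeFloat(0); // pad so Sound doesn't treat it as an EOF
        event.data.writeFloat(0);
    }
}
```

Note also that readSound in the question writes at offset 0 of soundBuffer on every PROGRESS event, overwriting earlier data; passing soundBuffer.length as the offset instead would append the new bytes.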