How does the Flash/AS3 player read the length of mp3 files? - actionscript-3

I have an AS3 music player built into an app that I'm putting together. It works perfectly with almost every file I've used, but there is one file that it stops early on. The file is roughly 56 seconds long, the player stops at about 44 seconds. I'm using trace to show the length, and for every other song the length is correct. In this case, trace shows roughly 44 seconds instead of 56. Here's the code I use to load the file:
length = 0;
request = new URLRequest(fileAddress);
track = new Sound();
track.load(request);
track.addEventListener(Event.COMPLETE, TrackLoaded);
And here's the TrackLoaded function:
private function TrackLoaded(e:Event):void{
length = track.length;
if (playWhenLoaded == true){
trackChannel = track.play(0);
trackChannel.addEventListener(Event.SOUND_COMPLETE, TrackFinishedPlaying);
playWhenLoaded = false;
}
}
Works perfectly with every other file. What am I missing?

Are you willing to host this 56-second MP3 somewhere for download and analysis? Or check the header info yourself with a hex editor. I suspect one of two things:
1) The header has an incorrect duration embedded, and Flash takes that as the final duration and stops there. After all, why read any remaining bytes? They could just be metadata rather than audio samples. Besides, what encoder would lie about the true duration? So it is accepted as the final duration even if your ears know it's wrong.
2) An MP3 sample-rate/bitrate issue: what sample rate does this one problem MP3 use? Check it against a working MP3. Also, are these various found sounds, or did you make each one yourself? I ask to confirm whether you used the same encoding settings for every file and yet only this one misbehaves.
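Before re-encoding, you could also sanity-check either theory from AS3 itself. A small diagnostic sketch, assuming the same track Sound object as in the question (the byte-based extrapolation is only meaningful for constant-bitrate files):
// needs: import flash.events.ProgressEvent;
private function onTrackProgress(e:ProgressEvent):void {
    if (track.bytesLoaded > 0) {
        // extrapolate the full duration from the portion loaded so far
        var estimated:Number = track.length / track.bytesLoaded * track.bytesTotal;
        trace("reported:", track.length, "ms  estimated:", estimated, "ms");
    }
}
// register alongside the COMPLETE listener:
// track.addEventListener(ProgressEvent.PROGRESS, onTrackProgress);
If the estimate drifts steadily while loading, the file is most likely VBR; if it settles around 44 seconds even on COMPLETE, Flash really does stop decoding there and re-encoding is the way to go.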
In any case, I think you could fix this particular MP3 by re-encoding it. Maybe save it as WAV or AIFF first, then convert that uncompressed audio back to an MP3 with a 44100 Hz sample rate, stereo sound and a constant bitrate (avoid VBR like the plague if you don't want issues).
Checking and fixing either of the above should get you a correctly parsed MP3. Hope it helps.

Related

Updated (reproducible) - Gaps when recording using MediaRecorder API (audio/webm opus)

----- UPDATE HAS BEEN ADDED BELOW -----
I have an issue with MediaRecorder API (https://www.w3.org/TR/mediastream-recording/#mediarecorder-api).
I'm using it to record speech from the web page (Chrome was used in this case) and save it as chunks.
I need to be able to play it while and after it is recorded, so it's important to keep those chunks.
Here is the code which is recording data:
navigator.mediaDevices.getUserMedia({ audio: true, video: false }).then(function(stream) {
recorder = new MediaRecorder(stream, { mimeType: 'audio/webm; codecs="opus"' })
recorder.ondataavailable = function(e) {
// Read blob from `e.data`, decode64 and send to sever;
}
recorder.start(1000)
})
The issue is that the WebM file I get when I concatenate all the parts is (rarely) corrupted. I can play it as WebM, but when I try to convert it to something else with ffmpeg, it gives me a file with shifted timings.
For example, I'm trying to convert a file with a duration of 00:36:27.78 to WAV, but I get a file with a duration of 00:36:26.04, which is 1.74 s less.
At the beginning of the file the audio is the same, but after about 10 minutes the WebM file plays with a small delay.
After some research, I found out that it also does not play correctly with the browser's MediaSource API, which I use for playing the chunks. I tried 2 ways of playing those chunks:
When I just merge all the parts into a single blob, it works fine.
When I add them via the sourceBuffer object, there are gaps (I can see them by inspecting the buffered property):
697.196 - 697.528 (~330ms)
996.198 - 996.754 (~550ms)
1597.16 - 1597.531 (~370ms)
1896.893 - 1897.183 (~290ms)
Those gaps are 1.55 s in total and they are exactly in the places where the desync between the WAV and WebM files starts. Unfortunately, the file where this is reproducible cannot be shared because it is a customer's private data, and I have not been able to reproduce the issue with other media yet.
What can be the cause for such an issue?
----- UPDATE -----
I was able to reproduce the issue on https://jsfiddle.net/96uj34nf/4/
In order to see the problem, click on the "Print buffer zones" button and it will display the buffered time ranges. They are 0 - 136.349, 141.388 - 195.439 and 197.57 - 198.589, so there are two gaps:
136.349 - 141.388
195.439 - 197.57
As you can see, the gaps are about 5 and 2 seconds long. I would be happy if someone could shed some light on why this is happening or how to avoid the issue.
Thank you
It's 7 months later so I guess you resolved this, but in case not...
When we started working with the MediaRecorder we had a few issues, including recordings disappearing (maybe we went over a RAM quota and the arrays were deallocated, or something like that).
What solved all our issues was to immediately put each chunk into an IndexedDB object store so it is saved to disk, and at the end of the recording build all those chunks into a blob and download it. No further work with the individual chunks, only the complete file.
I know this doesn't answer your question but maybe it helps.

Using URLStream to load part of a file

When loading a file using a URLStream (or a URLLoader) is there any way to specify a range of bytes to load instead of loading the entire file in to memory?
This is kind of a broad question because of nuances. The short answer is No, loading URLs will give you the complete resource.
The longer answer:
URLLoader - No; it loads the complete resource. You do get progress events which tell you how much of the file has been loaded.
URLStream - Maybe; it makes data available in chunks as it is loaded. You can close the stream before it finishes downloading if the data you care about is at the beginning of the file. Note that the data is in raw binary form.
Revisit URLLoader - Maybe; you could write a server that takes a beginIndex and an endIndex, then call urlLoader.load(new URLRequest('http://my-server/file?beginIndex=' + desiredBeginIndex + '&endIndex=' + desiredEndIndex));
You'll end up with only the portion of the file that you care about if you have a way of knowing which indices to specify.
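A rough sketch of that server-assisted idea (the beginIndex/endIndex query parameters and the endpoint behind them are hypothetical; you would have to implement that server yourself):
import flash.events.Event;
import flash.net.URLLoader;
import flash.net.URLLoaderDataFormat;
import flash.net.URLRequest;
import flash.utils.ByteArray;

var desiredBeginIndex:uint = 1024;
var desiredEndIndex:uint = 2048;
var rangeLoader:URLLoader = new URLLoader();
rangeLoader.dataFormat = URLLoaderDataFormat.BINARY; // get raw bytes back, not text
rangeLoader.addEventListener(Event.COMPLETE, function(e:Event):void {
    // only the requested slice arrives, assuming the server honours the parameters
    var slice:ByteArray = rangeLoader.data as ByteArray;
    trace("received", slice.length, "bytes");
});
rangeLoader.load(new URLRequest("http://my-server/file?beginIndex=" + desiredBeginIndex + "&endIndex=" + desiredEndIndex));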
...mmm... You could load the file and track the number of bytes received. Then, when you've reached your desired number of bytes, call close() on the loader (tracking and saving the bytes with a progress event, a ByteArray, and a running byte counter).
I don't know about getting data from the middle, though, but you can load any range of bytes from the beginning using that method.
URLLoader.close()
Closes the load operation in progress. Any load operation in progress is immediately terminated. If no URL is currently being streamed, an invalid stream error is thrown.
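A minimal sketch of that progress-tracking approach using URLStream instead (the byte budget and URL are illustrative):
import flash.events.ProgressEvent;
import flash.net.URLRequest;
import flash.net.URLStream;
import flash.utils.ByteArray;

var maxBytes:uint = 64 * 1024;            // how much of the start of the file we actually want
var collected:ByteArray = new ByteArray();
var stream:URLStream = new URLStream();
stream.addEventListener(ProgressEvent.PROGRESS, function(e:ProgressEvent):void {
    // append whatever has arrived so far to our buffer
    stream.readBytes(collected, collected.length, stream.bytesAvailable);
    if (collected.length >= maxBytes) {
        stream.close();                   // stop downloading the rest of the file
        // `collected` now holds at least the first maxBytes bytes
    }
});
stream.load(new URLRequest("http://example.com/big-file.bin"));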

How to play a seamless loop from an AudioSprite in AS3, without SAMPLE_DATA

I created a batch of sounds assembled with this tool:
AudioSprite
https://github.com/tonistiigi/audiosprite
The output is generally used for JS libraries, such as Howler, Zynga Jukebox, or SoundJS - but I wanted to see if it's possible to implement in AS3.
I started creating a Sound player that can load, parse and play the sounds based on the JSON and MP3 file this tool generates.
So far so good! ... except for loops.
Now, the big question is - is there a way to play a Sound-loop seamlessly given that all music & sounds coexist in the same MP3 file, and it has a start & end range to play and stop it?
Example of how the sounds are placed in the file:
mygame_sounds.mp3 = [BUZZ + LASER + BOING ... + TRACKLOOP]
I'm looking for a solution that does not involve using the SAMPLE_DATA Event (given it eats up a lot of CPU usage). If there's no way around it, please explain why.
So far I've had mild success using flash.utils.Timer objects triggered after a given AudioSprite's duration, but it's not consistent.
To stop / dispose of a non-looping sound, I rely on a Master Timer (running at very short intervals) and that seems to "cut" the sample appropriately. But I already tried using this Master Timer to play a looped-sound over and over - same latency issues.
Is there any method to predict / measure how much latency is to be expected by the time the sound completes one pass?
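For context, the Timer-based attempt looks roughly like this (a rough, illustrative sketch of the approach described above, not the actual project code):
import flash.events.TimerEvent;
import flash.media.Sound;
import flash.media.SoundChannel;
import flash.utils.Timer;

function playSpriteLoop(sound:Sound, startMs:Number, endMs:Number):void {
    var channel:SoundChannel = sound.play(startMs);
    // fire once when the sprite's region should be over...
    var loopTimer:Timer = new Timer(endMs - startMs, 1);
    loopTimer.addEventListener(TimerEvent.TIMER_COMPLETE, function(e:TimerEvent):void {
        channel.stop();                         // cut the sprite at its end marker
        playSpriteLoop(sound, startMs, endMs);  // ...and start the next pass
    });
    loopTimer.start();
}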
In SoundJS we could not find a way to get smooth looping of audio sprites in AS3 and went with a timer. We found Web Audio was the only API that allowed smooth looping, and therefore recommended staying away from audio sprites for sounds that need to loop smoothly if any other plugin might be used.
Hope that helps.
The reason you can't get smooth loops of a track retrieved from a larger audio file is that you cannot check the sound position more often than once per SWF frame, and the frame length depends on stage.frameRate and on the total processing time of your application, so it generally varies. If your looping sound lasts, say, 5.123 seconds (never mind how many samples; what matters is that its length is not a whole number of frames regardless of stage.frameRate), it will effectively play for either 5.125 seconds (205 frames at 40 fps, probably the best bet for this particular sound), 5.133 seconds (154 frames at 30 fps), or some odd number of frames if the SWF experiences lag. The excess milliseconds cannot be fully controlled due to AS3/Flash engine optimizations. So consider moving away from audio sprites and towards audio packs (several audio files in one SWF, or one sound per MP3).
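To make that arithmetic concrete, here is a tiny sketch using the same 5.123-second example (purely illustrative):
// The earliest moment you can react is the next frame boundary, so the loop
// length is effectively rounded up to a whole number of frames.
function quantizedLoopSeconds(loopSeconds:Number, frameRate:Number):Number {
    var frames:Number = Math.ceil(loopSeconds * frameRate);
    return frames / frameRate;
}
trace(quantizedLoopSeconds(5.123, 40)); // 5.125     (205 frames)
trace(quantizedLoopSeconds(5.123, 30)); // 5.1333... (154 frames)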
Although I'm still working on the perfect solution, this is the best I could come up with:
1. Load the JSON file / ByteArray.
2. Parse the JSON to obtain each sprite's ID, start and end times.
3. Load the MP3 file / ByteArray (requires loadCompressedDataFromByteArray()) into a master Sound object.
4. Once loaded, check if any sprites are marked as "loops".
5. Create separate Sound objects for those loops: extract the relevant portion from the master Sound and load it via loadPCMFromByteArray(), using some "magic numbers" (details below).
6. To play a one-shot sound, call the master Sound's play(sprite.start * 1000) (depending on the format; the JSON's start values are usually in seconds and need to be in milliseconds).
7. To play a seamless-loop sound, call play(0, 9999) on the individual Sound object created in step #5.
I won't go too deep into the details of how to stop the sounds (SoundChannel.stop(), bam!), but I'll explain the "magic numbers" mentioned above. See this snippet:
var goldenOffset:UInt = (64 << 5);   // nudges the start 2048 samples later
var goldenDuration:UInt = (64 << 2); // grabs 256 extra samples at the end
var sampleRate:UInt = 44100;
for (id in loops) {
    var sprite:AudioSpriteItem = _mapSprites.get(id);
    var loop:Sound = _mapLoops.get(id);
    var sampleBytes = new ByteArray();
    // sprite.start and sprite.duration are in seconds; convert them to sample counts
    var samplesTotal:UInt = cast(sprite.duration * sampleRate + goldenDuration);
    var samplesStart:UInt = cast(sprite.start * sampleRate + goldenOffset);
    // pull the raw PCM for this sprite out of the master sound...
    _sound.extract(sampleBytes, samplesTotal, samplesStart);
    sampleBytes.position = 0;
    // ...and load it into its own Sound object so it can loop on its own
    loop.loadPCMFromByteArray(sampleBytes, samplesTotal, "float", true);
}
Quite honestly, these magic goldenOffset and goldenDuration values were just found through trial and error. I could get close to a seamless loop without them by simply calculating the start and duration from the sampleRate (assuming the default of 44100), but every ending had a bit of a hiccup to it.
After several adjustments, those two "64 left bit-shifted" values made the loops sound smoother.
I posted the Haxe project on GitHub (a compiled SWC is also available in the /bin folder) if you wish to try it or read through the code.
FLAudioSprite
Github page: https://github.com/bigp/FLAudioSprite
SWF Demo (Download): bit.ly/FLAudioSpriteSWFDemo

How to split an audio file in ActionScript

I want to split an MP3 file into several pieces, e.g. from 0:30 to 0:45 and from 0:45 to 1:00, in ActionScript.
The closest thing I've found is this short snippet from documentation. I cannot find any articles or libraries that would help me with this. Is it even possible to achieve?
Thanks in advance for any tips.
Example of what I want: mp3cut
I haven't done this myself, but the way I would go about it follows this process:
Load the MP3 file
Decode the MP3 file
Extract the bits you need
Encode the extracted data
All of the below is to be considered untested pseudo code.
Step 1: Load the MP3 file
I assume that you already have a way to load the compressed MP3 file into memory. If not, there is plenty of information about this step to be found.
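For completeness, a minimal sketch of this step, assuming the MP3 sits at some URL (the file name is illustrative):
import flash.events.Event;
import flash.net.URLLoader;
import flash.net.URLLoaderDataFormat;
import flash.net.URLRequest;
import flash.utils.ByteArray;

var mp3Loader:URLLoader = new URLLoader();
mp3Loader.dataFormat = URLLoaderDataFormat.BINARY;   // keep the compressed bytes untouched
mp3Loader.addEventListener(Event.COMPLETE, function(e:Event):void {
    var myMP3Data:ByteArray = mp3Loader.data as ByteArray;
    // myMP3Data is what step 2 feeds into loadCompressedDataFromByteArray()
});
mp3Loader.load(new URLRequest("my-track.mp3"));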
Step 2: Decode the MP3 file
The Sound class exposes the method loadCompressedDataFromByteArray (docs), which takes the compressed MP3 data as a ByteArray and loads it into a Sound object (the actual decoding to raw samples happens when you play it or extract from it).
Example:
var mySound:Sound = new Sound();
mySound.loadCompressedDataFromByteArray(myMP3Data, myMP3Data.length);
Step 3: Extract a part of the sound
Using the extract (docs) method (as described in the link in your question) you can extract raw sound data from a Sound object. You pass in a ByteArray object to be populated with the data, specify where you want the "cut" to be, and the length of the clip to extract.
The data is stored using 44100 samples per second (left + right channel), and the calculations below are based on this. I might be wrong about this, so if it doesn't work as expected, please look this up further.
Example:
var sampleFrequency:int = 44100;
var startTimeInSeconds:Number = 30.0;
var lengthInSeconds:Number = 15.0;
// Calculate the position to start the clip (in samples)
var startPosition:Number = Math.round(startTimeInSeconds * sampleFrequency);
// Calculate the number of samples to extract
var samplesLength:Number = Math.round(lengthInSeconds * sampleFrequency);
var extractedBytes:ByteArray = new ByteArray();
mySound.extract(extractedBytes, samplesLength, startPosition);
Step 4: Encode the extracted sound back to MP3
Now that you have the sound data as a ByteArray, there are a few ways to encode it back to the MP3 format. This previous Stack Overflow answer mentions this library, but there might be other libraries out there better suited to your task.

How to load a ByteArray FLV in OSMF?

I'm working on a local application (it's not a website or anything web-related) and I have various FLVs with a very simple encryption method for now (just adding 10 to each byte).
I can load/play them using NetStream.appendBytes() after decrypting, but that only works after I have read all of the video data; it's not streamed.
What I really need is to stream those videos from a remote URL, decrypting while receiving data, using an OSMF-based player that I have already built.
I'm lost on how OSMF deals with FLV; otherwise, I would try to create a plugin or something like that.
I'd be very thankful if someone could point me in the right direction.
But I'd also be happy if someone could help me find a way to load a local file with OSMF by passing a ByteArray value instead of a URL (below), or even give me directions for creating an OSMF plugin to solve my problem.
videoElement.resource = "video_url/video.flv";
This is my current code, which just plays my decoded FLV byte array:
private function playBytes(bytes:ByteArray):void
{
// check for the FLV signature in the header
if (bytes.readUTFBytes(3) != "FLV")
{
_text.appendText("\nFile \""+ file +"\" is not a FLV");
return;
}
bytes.position = 0;
netConnection.connect(null);
netStream = new NetStream(netConnection);
netStream.client = { onMetaData:function(obj:Object):void { } }
video.attachNetStream(netStream);
addChild(video);
// put the NetStream class into Data Generation mode
netStream.play(null);
// before appending new bytes, reset the position to the beginning
netStream.appendBytesAction(NetStreamAppendBytesAction.RESET_BEGIN);
// append the FLV video bytes
netStream.appendBytes(bytes);
}
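As a starting point for the streaming case (outside OSMF), here is a hedged sketch that pulls chunks off a URLStream as they arrive, reverses the "+10 per byte" obfuscation described above, and appends them progressively. It assumes the same netConnection, netStream and video members as the code above; names are illustrative.
// needs: import flash.net.URLStream; import flash.net.URLRequest; import flash.events.ProgressEvent;
private var urlStream:URLStream = new URLStream();

private function playEncryptedStream(url:String):void
{
    netConnection.connect(null);
    netStream = new NetStream(netConnection);
    netStream.client = { onMetaData:function(obj:Object):void { } };
    video.attachNetStream(netStream);
    addChild(video);
    // data generation mode, same as in playBytes()
    netStream.play(null);
    netStream.appendBytesAction(NetStreamAppendBytesAction.RESET_BEGIN);
    urlStream.addEventListener(ProgressEvent.PROGRESS, onChunk);
    urlStream.load(new URLRequest(url));
}

private function onChunk(e:ProgressEvent):void
{
    var chunk:ByteArray = new ByteArray();
    urlStream.readBytes(chunk, 0, urlStream.bytesAvailable);
    // undo the "+10 per byte" obfuscation from the question
    for (var i:int = 0; i < chunk.length; i++)
    {
        chunk[i] = (chunk[i] - 10) & 0xFF;
    }
    // appendBytes is designed for progressive appends, so feed it each decrypted chunk in order
    netStream.appendBytes(chunk);
}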
Interesting post, I'd be interested to see the answer. Looking at something similar myself, though not with a stream, I came across the following.
http://ntt.cc/2008/07/15/bitsreader-read-bits-from-given-bytearray.html
After passing in the byte array you can read it back in arbitrary bit-sized chunks, e.g. bits.read(8). Perhaps this would send you down the correct path? Otherwise, I'm thinking you'd need to break it apart and essentially buffer smaller sections in order to concatenate all the buffered data...
Just a thought,