I'm pretty much an ActionScript novice, and I'm trying to just slice the first and last X bytes out of a byte array in AS3, but I can't seem to find anything anywhere on how to do that.
If it matters, the byte array is a set of floats recorded from a microphone that I'm trying to cut the first and last 1/4 of a second off of before it's encoded as a .wav file.
Assuming you have an existing ByteArray, let's call it rawBytes:
var trimmedBytes:ByteArray = new ByteArray();
var quarterSecond:int = 1000; // no. of bytes per 1/4 second (arbitrary estimate)
rawBytes.position = quarterSecond; // start reading just past the first quarter second
rawBytes.readBytes(trimmedBytes, 0, rawBytes.length - quarterSecond * 2); // stop a quarter second before the end
Your trimmedBytes variable will now be populated with the recording minus the first and last quarter second, assuming that the quarterSecond variable has the right value. I don't know what that value should be; it will depend on the sample rate, sample size and channel count you're recording at. You could probably get there via trial and error, though!
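If you'd rather not guess, you can derive the byte count from how the audio was captured. A rough sketch, assuming mono microphone data at 44.1 kHz stored as one 32-bit float (4 bytes) per sample; swap in your actual numbers:
var sampleRate:int = 44100;     // samples per second (assumed)
var bytesPerSample:int = 4;     // one 32-bit float per sample (assumed)
var channels:int = 1;           // mono microphone input (assumed)
var quarterSecond:int = int(sampleRate * 0.25) * bytesPerSample * channels; // 44,100 bytes with these values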
Related
I am using an endless while loop to convert byte data (read in single chunks) to integer values, 'manipulate' those values, reconvert them back to bytes, and write the bytes to the PyAudio stream (for sound).
Everything plays smoothly until I write a complex function that takes up too much processing time. Then I hear a bunch of pops, snaps and clicks over the audio. The reason this happens is that, between the time one chunk of data is handed to the PyAudio stream and the time the loop comes around to provide the next one, there is a 'transition' of silence while the loop repeats; that silence is what creates the pops between chunks when the loop is 'too slow'.
Is there a way to hold the 'voltage' going to the speakers constant, based on the last data value provided to PyAudio's stream? That would be a great way to smooth out the 'pops', 'snaps' and 'clicks' during playback, instead of there being silence until the next value is passed to the stream. The reason I don't use a chunk size greater than 1 is that I want to do many 'creative' things with PyAudio (through an endless loop) and have values inside the loop determine the 'voltage level' going to the speakers.
I'm trying to create a turn-based RPG where the player characters and the enemy characters each possess a speed stat. Using this stat, I would like to create an on-screen display of the next, say, 6 people in the queue to take their turn.
My issue is that I can't figure out how to turn each character's speed stat into a usable number for determining turn order.
For example:
char1.speed = 10;
char2.speed = 20;
char3.speed = 80;
In a situation like this, I would like to be able to create a turn queue such that char3 takes two or three turns ahead of the other characters, since he is significantly faster than the others. So the on-screen display would show portraits of char3, char3, char2, char3, char1, char3, for example. (I can make the queue display and make it re-sort itself; my struggle is making a changeable turn order that is based on a character's speed stat.)
Another issue that I'm struggling with is that I want to be able to modify a character's speed via spells, potions, etc., which may end up changing the turn order mid-battle. I anticipate having an updateTurns() function that will re-sort my queue when this happens... is the best way to go about this to give each character two speed stats, baseSpeed and adjSpeed, for example? That way baseSpeed remains the same no matter what happens through spells and items, while adjSpeed represents the character's speed at that particular moment in battle?
Thanks for the help, and hopefully I've made sense. This is my first time posting here, so if I need any more clarification or whatnot, just let me know.
Should be relatively straightforward. First you need your divisor, i.e., how much speed constitutes a single turn. I assume 10? To work out how many turns each character gets, set up a constant with the single-turn speed in your character base class:
public static const TURN:uint = 10;
Then you can do something like this to get each player's turns:
var turns:int = char2.speed / character.TURN; // how many turns char2 gets
Then you can have a main loop over an array of your characters, and a sub loop for each character that removes a turn and adds the character to the queue on each pass; once turns reaches 0, the main loop moves on to the next character. Once you have a queue, you could shuffle it afterwards to change the order up a bit. Break it into two tasks.
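A minimal sketch of that two-loop idea, assuming your roster lives in an array called fighters and the base class is the character class above (those names are placeholders, not from your code):
var queue:Array = [];
for each (var member:character in fighters) {        // main loop: every combatant
    var turns:int = member.speed / character.TURN;   // turns this character earns
    while (turns > 0) {                               // sub loop: one queue slot per turn
        queue.push(member);
        turns--;
    }
}
// With the example stats, queue now holds char3 eight times, char2 twice and char1 once;
// shuffle or interleave it afterwards so one character's turns aren't all bunched together.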
Once you have turns for each character, you can deduct some of them: also store a speedPenalty in each character, which is normally 0 but is changed to x when the character is hit by a spell. Then your main formula is actually:
(char2.speed / character.TURN) - speedPenalty
If you do this, you'll have to make sure each character can never go below 1 turn. Or, as you say, have a base speed and a current speed, deduct from the current speed and use that to calculate turns, then reset it to the base speed once the spell wears off.
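Inside the loop sketched above, the clamped version would look something like this (again, the names are placeholders):
var turns:int = Math.max(1, (member.speed / character.TURN) - member.speedPenalty); // never drops below one turn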
I have a project using a 240-bit octal data format that will be coming into the serial port of an Arduino Uno at 2.4 kbps, RS-232 converted to TTL.
The 240 bits contain, among other things, range, azimuth and elevation words, which are what I need to display.
The frame starts with a frame sync code, which is an alternating 7-bit binary code:
1110010 for frame 1 and
0001101 for frame 2 and so on.
I was thinking that I might use something like a val = Serial.read() command, along the lines of
if (val == 1110010 or 0001101) { data++val; }
so that I can validate the start of my string.
The rest of the 240-bit octal frame (all numbers) can be read from serial into a string, of which only parts will need to be printed to the screen.
Past the frame sync, all the octal data is serial with no nulls or delimiters, so I am thinking
printf("%.Xs",stringname[xx]);
will let me offset the characters as needed so they can be parsed out.
How do I tell the program that the frame sync it's looking for is binary, or that the data that needs to go into the string is octal, or that it may need to be converted to be read on the screen?
Please see the class I have created at http://textsnip.com/see/WAVinAS3 for parsing a WAVE file in ActionScript 3.0.
This class is correctly pulling apart info from the file header & fmt chunks, isolating the data chunk, and creating a new ByteArray to store the data chunk. It takes in an uncompressed WAVE file with a format tag of 1. The WAVE file is embedded into my SWF with the following Flex embed tag:
[Embed(source="some_sound.wav", mimeType="application/octet-stream")]
public var sound_class:Class;
public var wave:WaveFile = new WaveFile(new sound_class());
After the data chunk is separated, the class attempts to make a Sound object that can stream the samples from the data chunk. I'm having issues with the streaming process, probably because I'm not good at math and don't really know what's happening with the bits/bytes, etc.
Here are the two documents I'm using as a reference for the WAVE file format:
http://www.lightlink.com/tjweber/StripWav/Canon.html
https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
Right now, the file IS playing back! In real time, even! But...the sound is really distorted. What's going on?
The problem is in the onSampleData handler.
In your wav file, the amplitudes are stored as signed shorts, that is, 16-bit integers. You are reading them as 32-bit floats. Integers and floats are represented differently in binary, so that will never work right.
Now, the player expects floats. Why did they use floats? I don't know for sure, but one good reason is that it allows the player to accept a normalized value for each sample. That way you don't have to care or know what bit depth the player is using: the max value is 1, the min value is -1, and that's it.
So, your problem is that you have to convert your signed shorts to normalized signed floats. A short takes 16 bits, so it can store 2 ^ 16 (or 65,536) different values. Since it's signed and the sign takes up one bit, the magnitude can be at most 2 ^ 15. So you know your input is in the range -32,768 ... 32,767.
The output sample value, on the other hand, is normalized and must be in the range -1 ... 1.
So, you have to normalize your input. It's quite easy. Just take the read value and divide it by the max value, and you have your input amplitude converted to the range -1 ... 1.
Something like this:
private function onSampleData(evt:SampleDataEvent):void
{
    var amplitude:int = 0;
    // max magnitude for this bit depth (32768 for 16-bit samples)
    var maxAmplitude:int = 1 << (bitsPerSample - 1); // or Math.pow(2, bitsPerSample - 1);
    var sample:Number = 0;
    var actualSamples:int = 8192;
    var samplesPerChannel:int = actualSamples / channels;
    // "data" is assumed to be the data-chunk ByteArray with its endian set to
    // Endian.LITTLE_ENDIAN (WAV sample data is little-endian)
    for (var c:int = 0; c < samplesPerChannel; c++) {
        var i:int = 0;
        while (i < channels && data.bytesAvailable >= 2) {
            amplitude = data.readShort();      // signed 16-bit sample
            sample = amplitude / maxAmplitude; // normalize to -1 ... 1
            evt.data.writeFloat(sample);       // hand the normalized float to the player
            i++;
        }
    }
}
A few things to note:
- maxAmplitude could (and probably should) be calculated when you read the bit depth. I'm doing it in the method just so you can see it in the pasted code.
- Although maxAmplitude is calculated based on the read bit depth, and thus will be correct for any bit depth, I'm reading shorts in the loop, so if your wav file happens to use a different bit depth, this function will not work correctly. You could add a switch and read the necessary amount of data (i.e., readInt if the bit depth is 32). However, 16 bits is such a widely used standard that I doubt this is practically needed.
- This function will work for stereo wavs. If you want it to work for mono, rewrite it to write the same sample twice. That is, for each read you do two writes (your input is mono, but the player expects 2 samples); see the sketch after this list.
- I removed the EOF catch, as you can know whether you have enough data to read from your buffer by checking bytesAvailable. Reaching the end of the stream is not exceptional in any way, IMO, so I'd rather handle that case without an exception handler, but this is just a personal preference.
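For the mono case mentioned in the list, the inner read/write would look something like this (just a sketch against the same data and evt variables):
amplitude = data.readShort();        // one mono sample
sample = amplitude / maxAmplitude;
evt.data.writeFloat(sample);         // left channel
evt.data.writeFloat(sample);         // right channel gets the same value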
A few days ago, this was my question, and I found the answer. Maybe this will help someone else.
A. The first part of the problem: can you amplify sound using Flash? The AS3 documentation for SoundTransform says this about the volume property:
"The volume, ranging from 0 (silent) to 1 (full volume)."
At face value, this means you can only make sounds quieter. In fact, if you supply a value greater than one (1.0), sounds will be amplified. You risk saturating the sound and getting poor quality, but you can do it, and for voice you can get away with a lot. (Music is less forgiving, so experiment. This method does not do dynamic compression, which is better suited to music.)
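For example (1.5 is just an illustrative value):
soundTransform.volume = 1.5; // values above 1.0 amplify; watch for clipping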
B. The second part of the problem: the order in which you do things.
RIGHT:
soundTransform = new SoundTransform();
soundTransform.volume = volume * volumeAdjustment;
audioChannel.soundTransform = soundTransform;
WRONG:
soundTransform = new SoundTransform();
audioChannel.soundTransform = soundTransform;
soundTransform.volume = volume * volumeAdjustment;
I did some testing in CS3 and CS4, and got different results. In CS3, I could set the volume on the transform AFTER "audioChannel.soundTransform = soundTransform;" and everything was fine. But in CS4 it had no effect. I suspect that CS3 used pass by reference to set the soundTransform, while CS4 uses pass by value semantics and copies the object passed into it. The CS4 approach is better designed, but did break my code that worked fine in CS3.
C. The last question is how to convert a decibel value into a factor that can be multiplied by the volume to amplify (or quiet) the sound by the desired amount.
var multiplier:Number = Math.pow(10, decibels / 20); // 20 because samples are amplitude, not power
Note that "decibels" may be a positive number (to amplify) or a negative number (to make quieter). If decibels is zero, no change occurs.
A value of about 6 decibels will (to a close approximation) double the amplitude.
A value of 20 decibels will increase the amplitude tenfold (exactly).
Note that the decibel calculation uses a divisor of 20, not 10:
var multiplier:Number = Math.pow(10, decibels / 20);
Digital audio samples are amplitude, not power (they represent sound pressure, not sound power), and amplitude ratios convert as 20 * log10 rather than 10 * log10.
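Putting parts A, B and C together, a small helper along these lines (the function name and structure are mine, not from the original code) will amplify or attenuate a playing channel by a given number of decibels:
import flash.media.SoundChannel;
import flash.media.SoundTransform;

function amplifyChannel(channel:SoundChannel, decibels:Number):void {
    var st:SoundTransform = new SoundTransform();
    st.volume = Math.pow(10, decibels / 20); // dB -> amplitude multiplier
    channel.soundTransform = st;             // assign the transform last, as in part B
}
// e.g. amplifyChannel(audioChannel, 6); // roughly doubles the amplitude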