AS3 convert ByteArray of ARGB values to RGB uint - actionscript-3

I have a C++ Qt application that is sending a ByteArray of ARGB values via a TCP socket. The image is 100 x 100. The problem is that when I read the ByteArray I can't seem to get the image to display correctly. I've tried lots of different ways, and here is the AS3 code that's gotten me the closest.
bitmapData.lock();
for (var x:int = 0; x < 100; x++) {
    for (var y:int = 0; y < 100; y++) {
        alphaValue = byteArray.readUnsignedByte();
        redValue = byteArray.readUnsignedByte();
        greenValue = byteArray.readUnsignedByte();
        blueValue = byteArray.readUnsignedByte();
        color = redValue << 16 | greenValue << 8 | blueValue;
        bitmapData.setPixel(y, x, color);
    }
}
bitmapData.unlock();
I'm just not using the alpha value at the moment because I'm not sure what to do with it. Qt says "The image is stored using a 32-bit RGB format (0xffRRGGBB)." I tried using readUnsignedInt() too, but that didn't work either.
Here is what I'm trying to send next to what I see on screen (screenshots: "Sent" vs. "Received in Flash").
It's close, but not quite right. Does anyone have any idea what I may be doing wrong?

Much easier would be to use readUnsignedInt instead of four calls to readUnsignedByte, and then bitmapData.setPixel32 instead of setPixel. That way you don't have to deconstruct and reconstruct the values.
You might also have to switch the endianness (e.g. byteArray.endian = Endian.LITTLE_ENDIAN) depending on how it's encoded. I just can't remember off-hand which way around it is, so try that if it doesn't work the first time.
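Roughly like this (a sketch only, assuming the stream is sent row by row as 100 x 100 ARGB pixels and the BitmapData was created with transparency enabled):
byteArray.endian = Endian.BIG_ENDIAN; // flash.utils.Endian; try LITTLE_ENDIAN if the colours come out wrong
bitmapData.lock();
for (var y:int = 0; y < 100; y++) {
    for (var x:int = 0; x < 100; x++) {
        // one read gives the whole 0xAARRGGBB pixel, and setPixel32 keeps the alpha
        bitmapData.setPixel32(x, y, byteArray.readUnsignedInt());
    }
}
bitmapData.unlock();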

Related

LibTiff C# - Getting coordinates

I am using the LibTiff.NET library to load GeoTiff data in C# (inside Unity).
NOTE: I looked at GDAL also, but faced similar issues to those outlined below, and would much prefer to use LibTiff if possible.
I would ultimately like to be able to take a lat/long value and have a function that returns a chunk of pixel data for a 50m area around that point, streamed from a GeoTiff image on disk (not storing whole image in RAM).
I have a test file that is representative of what my software will be given in production.
I am trying to figure out how to read or compute the lat/long extents of the test file image, as I can't find a good tutorial or sample online which contains this functionality.
I can read the width and height of the file from the TIFF tags, but many other values that seem critical for computing the extents, such as the X and Y resolutions, are not present.
It also appears like the lat/long extents (or a bounding box) are not present in the tags.
At this point I am led to believe there may be more tags or header data that I am not familiar with, because when I load the test file into Caris EasyView I can see a number of properties that I would like to read or compute from the file.
Is it possible to obtain this data using LibTiff?
Or is there a better system I should use? (wrapped GDAL maybe?)
NOTE: I cannot link the test file due to NDA, plus it's enormous.
This is for a 32-bit GeoTIFF:
// Basic raster geometry from the standard TIFF tags.
int width = tiff.GetField(TiffTag.IMAGEWIDTH)[0].ToInt();
int height = tiff.GetField(TiffTag.IMAGELENGTH)[0].ToInt();
int samplesPerPixel = tiff.GetField(TiffTag.SAMPLESPERPIXEL)[0].ToInt();
int bitsPerSample = tiff.GetField(TiffTag.BITSPERSAMPLE)[0].ToInt();
int bytesPerSample = bitsPerSample / 8;

// Buffers for streaming one scanline at a time (4 bytes per 32-bit float sample).
byte[] scanline = new byte[tiff.ScanlineSize()];
float[] scanline32Bit = new float[tiff.ScanlineSize() / 4];

// ModelTiePoint is six doubles (i, j, k, x, y, z): raster point (i, j, k) maps to
// model point (x, y, z). With the usual (0, 0, 0) raster point, x and y (at byte
// offsets 24 and 32) are the lon/lat of the upper-left corner.
FieldValue[] modelTiePointTag = tiff.GetField(TiffTag.GEOTIFF_MODELTIEPOINTTAG);
byte[] modelTiePoint = modelTiePointTag[1].GetBytes();
double originLon = BitConverter.ToDouble(modelTiePoint, 24);
double originLat = BitConverter.ToDouble(modelTiePoint, 32);

// ModelPixelScale is three doubles (scaleX, scaleY, scaleZ) in model units
// (degrees for a geographic CRS) per pixel. Y is negated because row numbers increase southward.
FieldValue[] modelPixelScaleTag = tiff.GetField(TiffTag.GEOTIFF_MODELPIXELSCALETAG);
byte[] modelPixelScale = modelPixelScaleTag[1].GetBytes();
double pixPerLong = BitConverter.ToDouble(modelPixelScale, 0);
double pixPerLat = BitConverter.ToDouble(modelPixelScale, 8) * -1;
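Given those values, the extents follow from the origin plus image size times pixel scale. A minimal sketch, assuming a north-up geographic (lat/long) GeoTIFF with no rotation:
double west  = originLon;
double north = originLat;
double east  = originLon + width * pixPerLong;
double south = originLat + height * pixPerLat; // pixPerLat was negated above, so this moves south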

MFT Custom image Filters

I am currently developing a Metro Style App which uses an MFT (Media Foundation Transform) to filter the webcam's video stream into grayscale, as demonstrated in this sample.
However, now I want to apply other types of filters, such as exposure, hue, luminance, texture, vignette, etc. This answer says I am supposed to modify the TransformChroma method in order to achieve this. Unfortunately, I can't figure out how to get the Y value, I can only get the U and V. How do I get the Y value in the formats NV12, YUY2, and UYVY?
All help is greatly appreciated and I always accept an answer!
You would need to change the signature of the method to take another parameter, and modify the TransformImage_UYVY, TransformImage_YUY2 and TransformImage_NV12 methods to pass that parameter into the updated method. You would need to figure out how to extract that value yourself, though. For example, looking at the piece of code below you can see how the U and V values are extracted; the two Y values for that pixel pair sit in the high bytes of the same two WORDs, so they can be pulled out with a shift and mask. You can find descriptions of these formats online, e.g. here.
// Byte order is U0 Y0 V0 Y1
// Each WORD is a byte pair (U/V, Y)
// Windows is little-endian so the order appears reversed.
BYTE u = pSrc_Pixel[x] & 0x00FF;
BYTE v = pSrc_Pixel[x+1] & 0x00FF;
TransformChroma(mat, &u, &v);
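For UYVY, for instance, the luma bytes can be read alongside the existing chroma reads. A rough sketch only; the extended TransformChromaLuma method and its parameter list are hypothetical:
BYTE u  = pSrc_Pixel[x] & 0x00FF;            // low byte of first WORD  = U0
BYTE y0 = (pSrc_Pixel[x] & 0xFF00) >> 8;     // high byte of first WORD = Y0 (luma of pixel 1)
BYTE v  = pSrc_Pixel[x + 1] & 0x00FF;        // low byte of second WORD = V0
BYTE y1 = (pSrc_Pixel[x + 1] & 0xFF00) >> 8; // high byte of second WORD = Y1 (luma of pixel 2)
TransformChromaLuma(mat, &u, &v, &y0, &y1);  // hypothetical updated method that also takes luma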

as3 bytearray splice

I'm pretty much an ActionScript novice. I'm trying to just slice the first and last X bytes out of a byte array in AS3, and can't seem to find anything anywhere on how to do that.
If it matters, the byte array is a set of floats recorded from a microphone that I'm trying to cut the first and last 1/4 of a second off of before it's encoded as a .wav file.
Assuming you have an existing ByteArray, let's call it rawBytes:
var trimmedBytes:ByteArray = new ByteArray();
var quarterSecond:int = 1000; // no. of bytes per 1/4 second (arbitrary estimate)
rawBytes.position = quarterSecond; // skip the first 1/4 second of the source
rawBytes.readBytes(trimmedBytes, 0, rawBytes.length - quarterSecond * 2);
Your trimmedBytes variable will now be populated with the recording minus the first and last quarter second - assuming that the quarterSecond variable has the right value. I don't know what that value should be; I'd imagine it depends on the bitrate at which you're recording. You could probably get there via trial and error, though!
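For a rough idea of the number, here is a sketch that assumes the bytes came straight from a SampleDataEvent recording (one 32-bit float per sample) at an assumed rate of 44100 Hz, mono:
var sampleRate:int = 44100;          // assumed Microphone rate
var bytesPerSample:int = 4;          // microphone sample data arrives as 32-bit floats
var quarterSecond:int = int(sampleRate / 4) * bytesPerSample; // about 44100 bytes per 1/4 second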

Can someone translate this C++ into AS3?

This code stores the sqrt() of the numbers from 0 to 4095 in a table, and I would like to translate it into Actionscript 3.
unsigned short int_sqrt_x1024[4096];
for (int i=0; i<sizeof(int_sqrt_x1024)/sizeof(int_sqrt_x1024[0]); i++)
    int_sqrt_x1024[i] = (int)(sqrtf((float)i + 0.5f) * 1024.0f);
I've done it halfway, but the 'sizeof' parts got me; I haven't got a clue what to do with those!
So, based on your suggestions, I've come up with this. What do you think?
var int_sqrt_x1024:Vector.<uint> = new Vector.<uint>(4096, true);
for (var i:int = 0; i < int_sqrt_x1024.length; i++)
    int_sqrt_x1024[i] = Math.sqrt(i + 0.5) * 1024;
You can find the definition of sizeof HERE. To the best of my knowledge, there is no analogous operator in AS3. I have never encountered anything like it in documentation, and searches reveal nothing.
In fact, the closest thing I can find to it is the completely unrelated ByteArray, which I can guarantee would not achieve the same end, as one is an advanced data type and the other is an operator. Their usages aren't even similar.
I am curious, what is the goal of this code? Perhaps there is another way to achieve the same end. (And apparently from reading comments, there is actually a better way.)
EDIT: See Basic's comment below...there may be something similar.
Sorry, I can't provide a translation since I don't know Actionscript, but I think this will help you out too:
The C sizeof operator returns the size in bytes of its argument. This is not something you need to concern yourself with in a "managed" language like ActionScript. What the C code you posted does (I don't really see anything in it that would necessarily make it C++) is iterate through the loop (size_of_the_array_in_bytes / size_of_one_array_element_in_bytes) times. In your case, that complicated expression simply evaluates to 4096.
In other words, make a loop that executes the store of the square root 4096 times.
The C code you're using as a basis seems to be pretty poorly written. I can't see a reason one would use such a complicated, verbose and unreadable way to fill a simple lookup table. IMO, it should be something like this:
#define LOOKUPTABLE_LENGTH 4096
unsigned short int_sqrt_x1024[LOOKUPTABLE_LENGTH];
for (int i=0; i<LOOKUPTABLE_LENGTH; i++)
    int_sqrt_x1024[i] = (int)(sqrtf((float)i + 0.5f) * 1024.0f);
Much more readable, no?

How do I play back a WAV in ActionScript?

Please see the class I have created at http://textsnip.com/see/WAVinAS3 for parsing a WAVE file in ActionScript 3.0.
This class is correctly pulling apart info from the file header & fmt chunks, isolating the data chunk, and creating a new ByteArray to store the data chunk. It takes in an uncompressed WAVE file with a format tag of 1. The WAVE file is embedded into my SWF with the following Flex embed tag:
[Embed(source="some_sound.wav", mimeType="application/octet-stream")]
public var sound_class:Class;
public var wave:WaveFile = new WaveFile(new sound_class());
After the data chunk is separated, the class attempts to make a Sound object that can stream the samples from the data chunk. I'm having issues with the streaming process, probably because I'm not good at math and don't really know what's happening with the bits/bytes, etc.
Here are the two documents I'm using as a reference for the WAVE file format:
http://www.lightlink.com/tjweber/StripWav/Canon.html
https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
Right now, the file IS playing back! In real time, even! But...the sound is really distorted. What's going on?
The problem is in the onSampleData handler.
In your wav file, the amplitudes are stored as signed shorts, that is, 16-bit integers. You are reading them as 32-bit floats. Integers and floats are represented differently in binary, so that will never work right.
Now, the player expects floats. Why did they use floats? I don't know for sure, but one good reason is that it allows the player to accept a normalized value for each sample. That way you don't have to care or know what bit depth the player is using: the max value is 1, the min value is -1, and that's it.
So, your problem is that you have to convert your signed short to a normalized float. A short takes 16 bits, so it can store 2 ^ 16 (or 65,536) different values. Since it's signed and the sign takes up one bit, the magnitude tops out around 2 ^ 15, which means your input is in the range -32,768 ... 32,767.
The sample value is normalized and must be in the range -1 ... 1, on the other hand.
So, you have to normalize your input. It's quite easy. Just take the read value and divide it by the max value, and you have your input amplitude converted to the range -1 ... 1.
Something like this:
private function onSampleData(evt:SampleDataEvent):void
{
    var amplitude:int = 0;
    var maxAmplitude:int = 1 << (bitsPerSample - 1); // or Math.pow(2, bitsPerSample - 1);
    var sample:Number = 0;
    var actualSamples:int = 8192;
    var samplesPerChannel:int = actualSamples / channels;
    for (var c:int = 0; c < samplesPerChannel; c++) {
        var i:int = 0;
        while (i < channels && data.bytesAvailable >= 2) {
            amplitude = data.readShort();
            sample = amplitude / maxAmplitude;
            evt.data.writeFloat(sample);
            i++;
        }
    }
}
A couple of things to note:
- maxAmplitude could (and probably should) be calculated when you read the bit depth. I'm doing it in the method just so you can see it in the pasted code.
- Although maxAmplitude is calculated from the bit depth that was read, and so will be correct for any bit depth, I'm reading shorts in the loop, so if your wav file happens to use a different bit depth this function will not work correctly. You could add a switch and read the necessary amount of data (i.e., readInt if the bit depth is 32). However, 16 bits is such a widely used standard that I doubt this is practically needed.
- This function will work for stereo wavs. If you want it to work for mono, rewrite it to write the same sample twice: for each read, you do two writes (your input is mono, but the player expects 2 samples). There is a sketch of that variant after these notes.
- I removed the EOF catch, as you can tell whether you have enough data to read from your buffer by checking bytesAvailable. Reaching the end of the stream is not exceptional in any way, IMO, so I'd rather handle that case without an exception handler, but this is just a personal preference.
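On the mono point, a minimal sketch of how the read/write loop changes (same variables as in the function above; each input sample is written once per output channel):
var samplesWritten:int = 0;
while (samplesWritten < actualSamples && data.bytesAvailable >= 2) {
    amplitude = data.readShort();
    sample = amplitude / maxAmplitude;
    evt.data.writeFloat(sample); // left channel
    evt.data.writeFloat(sample); // right channel (duplicate of the mono sample)
    samplesWritten++;
}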