I have to read a sequence of bytes that was written in different ways (writeByte, writeShort and writeMultiByte) and display them on screen as a list of hex bytes.
My problem is converting the number 1500; I tried other numbers and the results were correct...
Here is an example:
var bytes:Array = [];
var ba:ByteArray = new ByteArray();
ba.writeShort(1500);
ba.position = 0;
for (var i:int = 0; i < ba.length; i++)
{
bytes.push(ba.readByte().toString(16));
}
trace(bytes); // 5,-24 but I'm expecting 5,DC
The method readByte reads a signed byte (range -128 to 127). The most significant bit defines the sign. For values greater than 127 (like DC) that bit is 1, so the number is interpreted as negative. The two's complement of the negative byte is used to get the signed value: DC is 1101 1100 in binary, its complement is 0010 0011 (hex 23), and adding one gives hex 24. The value is then regarded as negative, i.e. -36 in decimal, which toString(16) prints as the -24 you are seeing.
You should use readUnsignedByte to read values from 0 to 255.
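For example, a minimal sketch of the corrected loop from the question, using readUnsignedByte so the values stay in the 0-255 range:
var bytes:Array = [];
var ba:ByteArray = new ByteArray();
ba.writeShort(1500);
ba.position = 0;
for (var i:int = 0; i < ba.length; i++)
{
    // readUnsignedByte returns 0-255, so toString(16) yields "5" and "dc"
    bytes.push(ba.readUnsignedByte().toString(16));
}
trace(bytes); // 5,dc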
As there is no real Byte type in AS3, readByte() returns an int. You can try this instead:
for (var i:int = 0; i < ba.length; i++)
{
bytes.push(ba[i].toString(16));
}
If I serialize exactly analogous contents of an Array and a Vector into ByteArrays, I get almost twice the size for the vector. Check this code:
var arr:Array = [];
var vec:Vector.<int> = new Vector.<int>;
for (var i:int = 0; i < 100; ++i) {
arr.push(i);
vec.push(i);
}
var b:ByteArray;
b = new ByteArray();
b.writeObject(arr);
trace("arr",b.length); // arr 204
b = new ByteArray();
b.writeObject(vec);
trace("vec",b.length); // vec 404
Seems like a buggy or unoptimized implementation on Adobe's part..? Or am I missing something here?
According to the AMF 3 specification, AMF stores arrays in a separate manner optimized for dense arrays, and it also stores ints in a variable-length format to minimize the number of bytes used for small ints. This way, your array of values 0-99 gets stored as:
00000000 0x09 ; array marker
00000001 0x81 0x49 ; array length in U29A format = 100
00000003 0x01 ; UTF8-empty - this array is all dense
00000004 to 000000CB - integer data, first integer marker 0x04, then an integer in U29.
All your ints are less than 128, so each is represented as a single byte equal to the integer itself. This makes a small int in an array take 2 bytes instead of a full 4, or even 8 for large integers.
Now, for Vector: since a Vector is dense by default, it is better not to convert each integer to U29 format, because anything bigger than 0x40000000 would get converted to a DOUBLE (aka Number), and that is bad for vectors. So a Vector.<int> is stored as is, with 4 bytes per integer inside the vector.
00000000 0x0D ; vector-int marker
00000001 0x81 0x49 ; U29 length of the vector = 100
00000003 0x00 ; fixed vector marker, 0 is not fixed
00000004 to 00000193 - integers in U32
So, for small integers an array takes less space than a Vector of ints, but for large integers an array can take up to 9 bytes per int stored, while a Vector will always use 4 bytes per integer.
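To make the size difference concrete, here is a rough sketch of a U29 writer that follows the rules described above (purely an illustration of the format, not Adobe's internal code; the helper name writeU29 is made up for this example):
function writeU29(ba:ByteArray, v:uint):void
{
    if (v < 0x80) {                         // 1 byte: 7 value bits
        ba.writeByte(v);
    } else if (v < 0x4000) {                // 2 bytes: 14 value bits
        ba.writeByte(((v >> 7) & 0x7F) | 0x80);
        ba.writeByte(v & 0x7F);
    } else if (v < 0x200000) {              // 3 bytes: 21 value bits
        ba.writeByte(((v >> 14) & 0x7F) | 0x80);
        ba.writeByte(((v >> 7) & 0x7F) | 0x80);
        ba.writeByte(v & 0x7F);
    } else {                                // 4 bytes: the last byte carries a full 8 bits
        ba.writeByte(((v >> 22) & 0x7F) | 0x80);
        ba.writeByte(((v >> 15) & 0x7F) | 0x80);
        ba.writeByte(((v >> 8) & 0x7F) | 0x80);
        ba.writeByte(v & 0xFF);
    }
}
var ba:ByteArray = new ByteArray();
writeU29(ba, 99);      // values below 128 take a single byte
writeU29(ba, 100000);  // this one needs 3 bytes
trace(ba.length);      // 4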
Consider the following alteration to your code:
var arr:Array = [];
var vec:Vector.<int> = new Vector.<int>;
for (var i:int = 0; i < 100; ++i) {
var x:int=Math.floor(Math.random()*4294967295); // 0 to 0xFFFFFFFE
arr.push(x);
vec.push(x);
trace(x);
}
var b:ByteArray;
b = new ByteArray();
b.writeObject(arr);
trace("arr",b.length); // some variable value will be shown, up to 724
b = new ByteArray();
b.writeObject(vec);
trace("vec",b.length); // here the value will always be 404
I calculated there to be 16,777,216 possible hex color code combinations.
There are 16 possible values for a single hexadecimal character, and a hex color code contains at most 6 characters, which brought me to my conclusion of 16^6.
Is this correct? If not, please tell me how many possible color combinations there are and how it can be worked out.
There are 16,777,216 colors using #RRGGBB notation.
Each color channel is described using 1 byte of information, and a byte can contain 256 different values. So for 3 channels, it's:
256^3 = 16,777,216 = 16M
However, modern browsers also support transparency - #RRGGBBAA - and by similar logic you get:
256^4 = 4,294,967,296 = 4G
Yes, it's true. I made a simple Node program that returns an array of all the possible hex codes; here is the code:
function getColors(){
var hexCode = [0,1,2,3,4,5,6,7,8,9,'A','B','C','D','E','F'];
var arr = [];
for (var i = 0; i < hexCode.length; i++) {
console.log(`i done it ${i+1} times`);
for (var y = 0; y < hexCode.length; y++) {
for (var x = 0; x < hexCode.length; x++) {
for (var a = 0; a < hexCode.length; a++) {
for (var b = 0; b < hexCode.length; b++) {
for (var c = 0; c < hexCode.length; c++) {
arr.push(`#${hexCode[i]}${hexCode[y]}${hexCode[x]}${hexCode[a]}${hexCode[b]}${hexCode[c]}\n`);
}
}
}
}
}
}
return arr;
}
var colors = getColors();
console.log(colors.length);
And when I run it, it logs to the console: 16,777,216.
There are 184,549,376 possible color combinations in the rgba() color system if the alpha channel is limited to one decimal place, which is
R: 0 to 255 (256 values) ×
G: 0 to 255 (256 values) ×
B: 0 to 255 (256 values) ×
A: 0.0 to 1.0 in steps of 0.1 (11 values)
There are two ways to write a color: RGB (rgb(R,G,B)), which has a range of 0-255 for red, green and blue, and hexadecimal (#RRGGBB).
In hexadecimal there are 6 digits in total, with 2 digits for each color. The maximum 2-digit value in hexadecimal is FF, which in base 10 is 255.
If you think about it, RGB and hex are similar: both let you enter three numbers, one each for red, green and blue, and the maximum value for each number is 255.
The maximum value of 6 hexadecimal digits in base 10 is 16,777,215. If you also count #000000 you get 16,777,216 as the total number of possible color combinations.
If we use RGB, each channel ranges from 0 to 255, meaning there are 256 possible values for each of red, green and blue. 256^3 is 16,777,216.
Therefore, the answer to your question is 16,777,216, no matter which way you count it.
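A tiny sketch of that equivalence in AS3 (the helper name rgbToHex is made up for illustration): packing three 0-255 channels into one #RRGGBB value shows each channel occupying exactly one byte, which is where 256^3 comes from.
function rgbToHex(r:uint, g:uint, b:uint):uint
{
    return (r << 16) | (g << 8) | b; // one byte per channel
}
trace(rgbToHex(255, 255, 255).toString(16)); // ffffff, the highest code
trace(0xFFFFFF + 1);                         // 16777216 values in total, counting 0x000000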
Well, I think it's 16,777,216 too, because my hexadecimal converter said that ffffff is 16,777,215. ffffff is the highest hexadecimal color code, which alone would give 16,777,215; but that count does not include 000000, so adding it makes the answer 16,777,216.
Whoever said it was 16,777,216, you're right.
This is not really a problem, as the fix is simple and pretty costless. I'm guessing it's some property of for or uint that I don't understand, and I would just like to know what is going on, so...
Using ActionScript 3 I set up a for loop to run backwards through the elements of a Vector.
var limit:uint = myVector.length-1;
for(var a:uint = limit; a >= 0; a--)
{
trace(a);
}
When I run this it outputs 2, 1, 0 as expected but then moves on to 4294967295 and begins counting down from there until the loop times out and throws an Error #1502.
The fix is simply to type a as int rather than uint but I don't get why. Surely I am dealing with values of 0 and greater so uint is the correct data type right?
I guess that 4294967295 is the max value for uint but how does my count get there?
If you do
var myUint:uint = 0;
trace(myUint - 1);
Then the output is -1 so why, in my loop should I suddenly jump back up to 4294967295?
Sorry for the slightly rambling question and cheers for any help.
You are close. As you said, your loop gives you 2, 1, 0, 4294967295. This is because uint can't be negative. Your loop will always run while a >= 0, and since a can never reach -1 to break the loop condition, it continues to loop forever.
var myUint:uint = 0;
trace(myUint - 1);
I didn't test this but what is probably happening is that myUint is being converted to an int and then having 1 subtracted. The following code should be able to confirm this.
var myUint:uint = 0;
trace((myUint - 1) is uint);
trace((myUint - 1) is int);
To fix your loop you could use int or you could use a for each(var x:Type in myVector) loop if you don't need the index (a).
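Sketched out, assuming the myVector from the question:
// 1) type the index as int so it can legitimately drop below zero and end the loop
for (var a:int = myVector.length - 1; a >= 0; a--)
{
    trace(a);
}
// 2) or use for each when the index itself isn't needed (note: it iterates forwards)
for each (var x:* in myVector)
{
    trace(x);
}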
An iteration and an a-- still occur when a is 0; because the variable is unsigned, decrementing it wraps around to the maximum value of the type. This is the way "minus" values behave in unsigned types.
Change your code into:
var limit:uint = myVector.length;
for (var a:uint = limit; a > 0; a--)
{
    trace(a - 1); // visits limit-1 down to 0 without ever decrementing the uint below zero
}
In binary, the first bit of an int is the sign bit; in a uint that same bit is just another value bit.
A one-byte signed int looks like 1000 0000 == -128 and 0111 1111 == 127.
A one-byte unsigned int looks like 1000 0000 == 128 and 0111 1111 == 127.
This is it.
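A tiny illustration of that wrap-around, viewing the same 32-bit pattern as uint and as int:
var u:uint = uint(-1);
trace(u); // 4294967295 - all 32 bits set, read as unsigned
var n:int = int(4294967295);
trace(n); // -1 - the same bits, read as signed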
I want to pack epoch milliseconds into 6 bytes, but I have a problem. Let me illustrate it:
var t:Number = 1330454496254; // epoch milliseconds
var dataBuffer:ByteArray = new ByteArray();
trace(t);
for (var i:int = 6; i > 0; i--) {
    dataBuffer.writeByte(((t >>> 8*(i-1)) & 255));
    trace(dataBuffer[dataBuffer.length - 1]);
}
Output:
1330454496254
131
254
197
68
131
254
What am I doing wrong?
I'm just guessing but I think your t variable is getting automatically converted to an int before the bit operation takes effect. This, of course, destroys the value.
I don't think it's possible to use Number in bit operations - AS3 only supports them with ints.
Depending on how you acquire the value in t, you may want to start with two ints and then extract the bytes from those.
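One hedged sketch of that idea: split the Number with plain division and modulo, so the upper bits survive the 32-bit limit of the bit operators, then write the two parts as a short plus an unsigned int to get the six bytes (the variable names are just for illustration):
var t:Number = 1330454496254;               // epoch milliseconds, fits in 48 bits
var dataBuffer:ByteArray = new ByteArray();
var hi:int = int(t / 4294967296);           // upper 16 of the 48 bits (t / 2^32)
var lo:uint = uint(t % 4294967296);         // lower 32 bits
dataBuffer.writeShort(hi);                  // 2 bytes
dataBuffer.writeUnsignedInt(lo);            // 4 bytes
for (var i:int = 0; i < dataBuffer.length; i++) {
    trace(dataBuffer[i].toString(16));      // 1, 35, c5, 44, 83, fe
}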
The Number type is an IEEE 754 64-bit double-precision number, which is quite a different format to your normal int. The bits aren't lined up quite the same way. What you're looking for is a ByteArray representation of a normal 64-bit int type, which of course doesn't exist in ActionScript 3.
Here's a function that converts a Number object into its "int64" equivalent:
private function numberToInt64Bytes(n:Number):ByteArray
{
// Write your IEEE 754 64-bit double-precision number to a byte array.
var b:ByteArray = new ByteArray();
b.writeDouble(n);
// Get the exponent.
var e:int = ((b[0] & 0x7F) << 4) | (b[1] >> 4);
// Significant bits.
var s:int = e - 1023;
// Number of bits to shift towards the right.
var x:int = (52 - s) % 8;
// Read and write positions in the byte array.
var r:int = 8 - int((52 - s) / 8);
var w:int = 8;
// Clear the first two bytes of the sign bit and the exponent.
b[0] &= 0x80;
b[1] &= 0xF;
// Add the "hidden" fraction bit.
b[1] |= 0x10;
// Shift everything.
while (w > 1) {
if (--r > 0) {
if (w < 8)
b[w] |= b[r] << (8 - x);
b[--w] = b[r] >> x;
} else {
b[--w] = 0;
}
}
// Now you've got your 64-bit signed two's complement integer.
return b;
}
Note that it works only with integers within a certain range, and it doesn't handle values like "not a number" and infinity. It probably also fails in other cases.
Here's a usage example:
var n:Number = 1330454496254;
var bytes:ByteArray = numberToInt64Bytes(n);
trace("bytes:",
bytes[0].toString(16),
bytes[1].toString(16),
bytes[2].toString(16),
bytes[3].toString(16),
bytes[4].toString(16),
bytes[5].toString(16),
bytes[6].toString(16),
bytes[7].toString(16)
);
Output:
bytes: 0 0 1 35 c5 44 83 fe
It should be useful for serializing data in AS3 later to be read by a Java program.
Homework assignment: Write int64BytesToNumber()
var bytes:ByteArray = new ByteArray();
bytes.writeInt(0);
trace(bytes.length); // prints 4
trace(bytes.toString().length); // prints 4
When I run the above code the output suggests that every character in the string returned by toString contains one byte from the ByteArray. This is of course great if you want to display the content of the ByteArray, but not so great if you want to send its content encoded in a string and the size of the string matters.
Is it possible to get a string from the ByteArray where every character in the string contains two bytes from the ByteArray?
You can reinterpret your ByteArray as containing only shorts. This lets you read two bytes at a time and get a single number value representing them both. Next, you can take these numbers and reinterpret them as being character codes. Finally, create a String from these character codes and you're done.
public static function encode(ba:ByteArray):String {
var origPos:uint = ba.position;
var result:Array = new Array();
for (ba.position = 0; ba.position < ba.length - 1; )
result.push(ba.readShort());
if (ba.position != ba.length)
result.push(ba.readByte() << 8);
ba.position = origPos;
return String.fromCharCode.apply(null, result);
}
There is one special circumstance to pay attention to. If you try reading a short from a ByteArray when there is only one byte remaining in it, an exception will be thrown. In this case, you should call readByte with the value shifted 8 bits instead. This is the same as if the original ByteArray had an extra 0 byte at the end. (making it even in length)
Now, as for decoding this String... Get the character code of each character, and place them into a new ByteArray as shorts. It will be identical to the original, except if the original had an odd number of bytes, in which case the decoded ByteArray will have an extra 0 byte at the end.
public static function decode(str:String):ByteArray {
var result:ByteArray = new ByteArray();
for (var i:int = 0; i < str.length; ++i) {
result.writeShort(str.charCodeAt(i));
}
result.position = 0;
return result;
}
In action:
var ba:ByteArray = new ByteArray();
ba.writeInt(47);
ba.writeUTF("Goodbye, cruel world!");
var str:String = encode(ba);
ba = decode(str);
trace(ba.readInt());
trace(ba.readUTF());
Your question is a bit confusing. You have written a 4-byte int to your ByteArray. You haven't written any characters (Unicode or otherwise) to it. If you want to pass text, write text and pass it as UTF-8. It will take less space than having two bytes for every character, at least for most Western languages.
But honestly, I'm not sure I understood what you are trying to accomplish. Do you want to send numbers or text? What backend are you talking to? Do you need a ByteArray at all?
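As a rough illustration of that last point (a sketch, not tied to any particular backend):
var ba:ByteArray = new ByteArray();
ba.writeUTFBytes("Goodbye, cruel world!"); // raw UTF-8 bytes, no length prefix
trace(ba.length); // 21 - one byte per character for plain ASCII text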