I'm examining the URIEncodingBitmap class of the com.adobe.net package, and I'm having a hard time understanding exactly how it works internally. Here's the code:
package com.adobe.net
{
    import flash.utils.ByteArray;

    /**
     * This class implements an efficient lookup table for URI
     * character escaping. This class is only needed if you
     * create a derived class of URI to handle custom URI
     * syntax. This class is used internally by URI.
     *
     * @langversion ActionScript 3.0
     * @playerversion Flash 9.0
     */
    public class URIEncodingBitmap extends ByteArray
    {
        /**
         * Constructor. Creates an encoding bitmap using the given
         * string of characters as the set of characters that need
         * to be URI escaped.
         *
         * @langversion ActionScript 3.0
         * @playerversion Flash 9.0
         */
        public function URIEncodingBitmap(charsToEscape:String) : void
        {
            var i:int;
            var data:ByteArray = new ByteArray();

            // Initialize our 128 bits (16 bytes) to zero
            for (i = 0; i < 16; i++)
                this.writeByte(0);

            data.writeUTFBytes(charsToEscape);
            data.position = 0;

            while (data.bytesAvailable)
            {
                var c:int = data.readByte();
                if (c > 0x7f)
                    continue; // only escape low bytes

                var enc:int;
                this.position = (c >> 3);
                enc = this.readByte();
                enc |= 1 << (c & 0x7);
                this.position = (c >> 3);
                this.writeByte(enc);
            }
        }

        /**
         * Based on the data table contained in this object, check
         * if the given character should be escaped.
         *
         * @param char the character to be escaped. Only the first
         * character in the string is used. Any other characters
         * are ignored.
         *
         * @return the integer value of the raw UTF8 character. For
         * example, if '%' is given, the return value is 37 (0x25).
         * If the character given does not need to be escaped, the
         * return value is zero.
         *
         * @langversion ActionScript 3.0
         * @playerversion Flash 9.0
         */
        public function ShouldEscape(char:String) : int
        {
            var data:ByteArray = new ByteArray();
            var c:int, mask:int;

            // write the character into a ByteArray so
            // we can pull it out as a raw byte value.
            data.writeUTFBytes(char);
            data.position = 0;
            c = data.readByte();

            if (c & 0x80)
            {
                // don't escape high byte characters. It can make international
                // URIs unreadable. We just want to escape characters that would
                // make URI syntax ambiguous.
                return 0;
            }
            else if ((c < 0x1f) || (c == 0x7f))
            {
                // control characters must be escaped.
                return c;
            }

            this.position = (c >> 3);
            mask = this.readByte();

            if (mask & (1 << (c & 0x7)))
            {
                // we need to escape this, return the numeric value
                // of the character
                return c;
            }
            else
            {
                return 0;
            }
        }
    }
}
Although I understand how ByteArray and the various bitwise operators (>>, <<, &, |=, etc.) work, I'm almost at a complete loss as to what this class does exactly (or rather, why it does things the way it does).

Could somebody give a rundown of the purpose of all the bit-shifting and masking in this class? In particular:

1. What is the constructor initializing exactly, and why?
   a. What is this.position = (c >> 3); doing, or rather, why?
   b. What is enc |= 1 << (c & 0x7); doing?
2. What is the mask doing exactly in ShouldEscape()?
ad 1. The constructor creates an escape-definition bitmap (16 bytes = 128 bits), one bit per ASCII character. The position of a bit corresponds to the ordinal value of its character, and the bit's value says whether that character should be escaped or not.

ad a. This line computes the index of the appropriate byte in the bitmap for the given character (the character code divided by 8).

ad b. This sets the bit corresponding to the character within that byte (the character code modulo 8 gives the bit position).

ad 2. The mask holds the bitmap byte for the given character and is used to check whether the corresponding bit is set or not.
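A quick worked example (a sketch using the class as quoted above; the set of characters passed to the constructor here is an arbitrary choice): for '%', the character code is 37 (0x25), so its flag lives in byte 37 >> 3 = 4 of the bitmap, at bit 37 & 0x7 = 5:

var bitmap:URIEncodingBitmap = new URIEncodingBitmap("%?#");

// '%' is 0x25 = 37: byte index 37 >> 3 = 4, bit 37 & 0x7 = 5
trace(bitmap.ShouldEscape("%"));  // 37 -- in the escape set
trace(bitmap.ShouldEscape("A"));  // 0  -- not in the set, no escaping needed
trace(bitmap.ShouldEscape("\n")); // 10 -- control characters are always escaped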
I am programming a display and am able to show characters on it by using this function:
void printChar(char *tekst, uint16_t xPos, uint16_t yPos)
{
    // calculate the position the first byte should be placed on
    uint16_t startPos = yPos * width_disp_byte + (xPos / 8);
    int i;

    // put all 16 bytes in the right place on the display, based on the user's input
    for (i = 0; i < 16; i++)
    {
        test_image[startPos] = convertChar(*tekst, i);
        startPos += width_disp_byte;
    }
}
Basically, I take a character and find its location in an array that I built. Then I take 16 bytes of data and put them on the display.

The next step is to display integer variables on the display. I have written code that looks like this:
void printVariable(uint16_t integer, uint16_t xPos, uint16_t yPos)
{
    uint16_t value;
    uint16_t remainder;
    char printValue;

    // max value that can be displayed is 9999, caused by the '4' in the for loop
    for (int i = 0; i < 4; i++)
    {
        value = integer;
        // int_power calculates the divisor, going from 3 to 0, i.e. 1000, 100, 10, 1
        value /= int_power(10, (3 - i));
        // remove the highest digit from the integer value (for 312, remove the 3 by subtracting 300)
        integer -= (value) * (int_power(10, (3 - i)));
        // add '0' to value to get the correct ASCII value
        value += '0';
        // convert the uint16_t value into a char
        printValue = value;
        printChar(printValue, xPos, yPos);
        xPos += 8;
    }
}
I take a variable, let's say 3164. The first step is to divide this by 1000. The answer will be 3, since it's integer division. I display this character using the printChar function. The next step removes 3000 from 3164 and divides the remaining value by 100, resulting in 1. Again this value is printed using the printChar function. Then 100 is removed from 164 and the rest is divided by 10, etc.
This code is quite limited in its use, but it fits perfectly in what I want to achieve. There is no need to print variables within a string.
The problem here is that the printChar function does not work like I have written in the code. Normally I would use the printChar function like this:
printChar("F",0,0);
This would print the character F in the top-left corner. But if I use the printChar function like this, it doesn't work:
printChar(printValue,xPos,yPos);
The warning message says:
incompatible integer to pointer conversion passing 'char' to parameter of type 'char *'; take the address with &
If I take the address with &, I don't get the correct value on the display.
How can I fix this?
You only want to print ONE character, so you do not need a pointer as the parameter. Your function would work like this:
void printChar(char tekst, uint16_t xPos, uint16_t yPos)
{
    ...
    // depending on the parameters of your convertChar function, use either:
    ... = convertChar(tekst, i);  // if you can modify that function too
    ... = convertChar(&tekst, i); // if the function is to be used as it is
}
The difference is char tekst instead of char *tekst. Note that a call site passing a string literal, such as printChar("F", 0, 0);, must then switch to a character literal: printChar('F', 0, 0);. Your call printChar(printValue, xPos, yPos); already passes a char, so it compiles as-is.
I am trying to add two numbers, but I don't get the correct result:
var n1:Number = 2785077255;
var n2:Number = 100000097214922752;

trace(Number(n1 + n2)); // traces 100000100000000000, not 100000100000000007
trace((Number.MAX_VALUE - Number(n1 + n2)) > 100); // traces true
When I got the wrong result, I thought the sum exceeded Number's max value, so I tested that as well, but it doesn't trace false as I expected.
Yes, the problem is in Number, as @Phylogenesis mentioned: it's actually a 64-bit double with 52 explicit mantissa bits (53 significant bits), and your result exceeds that.
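You can see the limit directly (a minimal sketch; 2^53 is the first integer where a double's integer precision runs out):

trace(Math.pow(2, 53));     // 9007199254740992 -- exact
trace(Math.pow(2, 53) + 1); // 9007199254740992 -- the +1 is lost
// Your n2 is above 2^56, where doubles step in units of 16, so n1 + n2
// is rounded to the nearest representable value: 100000100000000000.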
The good news is that there is a workaround for that, even two :)

1. Use a BigInteger/LongInt AS3 implementation (you can google several of them), for instance BigInteger from as3crypto, or LongInt from lodgamebox.
2. Use the utility method below, which I created once (it's based on LongInt from the lodgamebox library). It currently only handles multiplying, but you can modify the solution for addition as a small task. It gives the best performance, since it creates no temporary arrays/byte arrays.
/**
 * Safe multiplying of two 32-bit uints without precision loss.
 *
 * Usage:
 * Default behaviour (with 64-bit Number mantissa overflow):
 * uint(1234567890 * 134775813) = 1878152736
 *
 * Fixed correct result by this method:
 * uint(1234567890 * 134775813) = 1878152730
 *
 * @param val1
 * @param val2
 * @return
 */
public static function multiplyLong(val1:uint, val2:uint):uint
{
    var resNum:Number = val1 * val2;

    // 52 mantissa bits of a 64-bit double (Number) hold the result without precision loss
    if (resNum <= 0xFFFFFFFFFFFFF)
        return uint(resNum);

    // otherwise, compute only the low 32 bits of the multiplication result
    var i:uint, mul:Number, ln:uint = 0, hn:uint = 0, _low:uint = val1;
    for (i = 1 << 31; i; i >>>= 1)
    {
        if (val2 & i)
        {
            mul = _low * i;
            ln += mul & uint.MAX_VALUE;
        }
    }
    _low = ln;
    return _low;
}
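Usage, with the values from the doc comment above:

trace(uint(1234567890 * 134775813));        // 1878152736 -- low bits of the rounded double
trace(multiplyLong(1234567890, 134775813)); // 1878152730 -- exact low 32 bits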
I'm moving from files to MySQL, and I want to use md5 instead of this Encrypt function:
public Encrypt(string[])
{
    for (new x = 0; x < strlen(string); x++)
    {
        string[x] += (3 ^ x) * (x % 15);
        if (string[x] > 0xff)
        {
            string[x] -= 256;
        }
    }
    return 1;
}
But I need to decrypt the existing data first. I don't know how to write a decryption function. Can anybody help me?
My understanding of PAWN is that it uses null-terminated strings. If that is the case, then this encryption is not a reversible process in general.

Consider a string where the thirteenth character (string[12]) is 'L'. The offset added to it is (3^12) * (12 % 15); since ^ is bitwise XOR in PAWN, that is (3 XOR 12) * 12 = 15 * 12 = 180. In ASCII, the character 'L' has a value of 76, which when added to 180 gives 256. After wrapping to fit the 0-255 character range, that becomes a zero, potentially terminating your encrypted string somewhere in the middle.

If you are storing the length of the original string separately, or it's always a fixed length, then maybe this isn't a problem. But if you are relying on a null terminator to determine the length of the string, it isn't going to work.
It looks like the "encryption" adds to each character a number derived from its position, so it can be undone by subtracting the same number:
public Decrypt(string[])
{
    for (new x = 0; x < strlen(string); x++)
    {
        string[x] -= (3 ^ x) * (x % 15);
        if (string[x] < 0x00)
        {
            string[x] += 256;
        }
    }
    return 1;
}
I am trying to understand the significance of null bytes in a ByteArray. Do they act like a terminator? I mean, can we not write further into the ByteArray once a null byte has been written?
For instance,
import flash.utils.*;

public class print3r
{
    public function print3r()
    {
        Util.print(nullout());
    }

    public function nullout():ByteArray
    {
        var bytes:ByteArray = new ByteArray();
        bytes.writeInt(((403705888 + 1) - 1)); // non-printable characters
        bytes.writeInt(((403705872 - 1) + 1)); // non-printable characters
        bytes.writeInt(0x18101000);            // notice the null byte in this DWORD
        bytes.writeInt(0x41424344);            // ASCII characters ABCD
        return bytes;
    }
}

new print3r;
This gives a blank output. Now, if I replace the DWORD 0x18101000 with 0x18101010, I can see the ASCII padding ABCD in the output.

My question is: is it possible to write past the null byte into the ByteArray?

The reason I ask is that I have seen ActionScript code in which a lot of writeInt and writeByte operations are performed on the ByteArray even after a null byte has been written.
Thanks.
is it possible to write past the null byte into the ByteArray()?
Of course it is. A ByteArray is a chunk of raw data. You can write whatever you like there, and you can read it in whatever way you like (using zero bytes as delimiters or whatever else you may want to do).
What you see when you send your bytes to standard output with trace() depends solely on what you do with your data to convert it to a string. There are several ways of converting an array of bytes to a string, so your question is missing an explanation of what the Util.print() method does.
Here are several options for converting bytes to a string:

1. Loop through the bytes and output characters; the encoding is up to you.
2. Read a string with ByteArray.readUTFBytes(). This method reads UTF-8-encoded symbols; it stops when a zero character is encountered.
3. Read a string with ByteArray.readUTF(). This method expects your string to be prefixed with an unsigned short indicating its length; otherwise it is the same as ByteArray.readUTFBytes().
4. Use ByteArray.toString(). This is what happens when you simply do trace(byteArray);. This method ignores zero bytes and outputs the rest. It uses the System.useCodePage setting to decide on the encoding, and can use a UTF BOM if the data begins with it.
Here are some tests that illustrate the above:
var test:ByteArray = new ByteArray();

// latin (1 byte per character)
test.writeUTFBytes("ABC");
// zero byte
test.writeByte(0);
// cyrillic (2 bytes per character)
test.writeUTFBytes("\u0410\u0411\u0412");

trace(test);            // ABCАБВ
trace(test.toString()); // ABCАБВ

test.position = 0;
trace(test.readUTFBytes(test.length)); // ABC

// simple loop over the raw bytes, output as hex
var output:String = "";
var byte:uint;
for (var i:uint = 0; i < test.length; i += 1) {
    byte = uint(test[i]);
    if (output.length && i % 4 == 0) {
        output += " ";
    }
    output += (byte > 0xF ? "" : "0") + byte.toString(16);
}
trace(output); // 41424300 d090d091 d092
Writing a null to a byte array has no significance as far as I know. The print function might, however, treat it as a string terminator.
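A minimal sketch confirms that writes simply continue past the null:

var ba:ByteArray = new ByteArray();
ba.writeByte(0x41); // 'A'
ba.writeByte(0x00); // a null byte
ba.writeByte(0x42); // 'B' -- written past the null without issue
trace(ba.length);   // 3
trace(ba[2]);       // 66 -- the byte after the null is intact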
Given a set of letters, say A..F, how can one generate every combination of these letters of a specific length? I.e., for length 4, generate all strings built from these letters, {AAAA, ABCD, ...} (duplicates included). I am not able to work out code that does this. It pertains to the Mastermind game that I am trying to simulate. Is there an algorithm to perform this generation?

regards,
darkie
There is an algorithm called Heap's Algorithm for generating permutations, which might suit your purposes; I found an example implementation here. Note, though, that Heap's algorithm permutes a fixed set of elements without repetition, so by itself it won't produce strings with repeated letters such as AAAA.
I'm not sure what the name of such an algorithm would be (it amounts to enumerating the Cartesian product of the alphabet with itself, sometimes called permutations with repetition), but it is a recursive one. That is, have a method that fills in one character, and simply keep calling itself until you're at the desired length of string, then start filling in your array. Here's some sample C# code that should help:
public void GetPermutations()
{
    string currentPrefix = "";  // Just a starting point
    int currentLength = 1;      // one-based
    int desiredLength = 4;      // one-based
    string alphabet = "ABCDEF"; // Characters to build permutations from
    List<string> permutations = new List<string>();

    FillPermutations(currentPrefix, currentLength, alphabet, desiredLength, permutations);
}

public void FillPermutations(string currentPrefix, int currentLength, string alphabet, int desiredLength, List<string> permutations)
{
    // If we're not at the desired depth yet, keep calling this function recursively
    // until we attain the length we want.
    for (int i = 0; i < alphabet.Length; i++)
    {
        string currentPermutation = currentPrefix + alphabet[i].ToString();
        if (currentLength < desiredLength)
        {
            // Increase current length by one and recurse; the current permutation becomes the new prefix
            int newCurrentLength = currentLength + 1;
            FillPermutations(currentPermutation, newCurrentLength, alphabet, desiredLength, permutations);
        }
        else
        {
            // We're at the desired length, so add this permutation to the list
            permutations.Add(currentPermutation);
        }
    }
}