Decrypting Unknown Hash Encryption - mysql

I'm moving from files to MySQL and I want to use MD5 instead of this Encrypt function:
public Encrypt(string[])
{
    for(new x=0; x < strlen(string); x++)
    {
        string[x] += (3^x) * (x % 15);
        if(string[x] > (0xff))
        {
            string[x] -= 256;
        }
    }
    return 1;
}
But I need to decrypt it. I don't know how to make a decrypting function. Can anybody help me?

My understanding of PAWN is that it uses null-terminated strings. If that is the case, then this encryption is not a reversible process in general.
Consider a string where the thirteenth character (string[12]) is 'L'. The offset that will be added to it is (3^12) * (12 % 15); note that ^ is bitwise XOR in PAWN, so that is 15 * 12 = 180, not a power of three. In ASCII, the character 'L' has a value of 76, which when added to 180 is 256. After wrapping to fit in the 0-255 character range that becomes a zero, potentially terminating your encrypted string somewhere in the middle.
If you are storing the length of the original string separately or it's always a fixed length, then maybe this isn't a problem. But if you are relying on a null terminator to determine the length of the string, it isn't going to work.
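Here is a minimal C sketch of that arithmetic (hypothetical, only to show the wrap; in PAWN, as in C, ^ is bitwise XOR):

#include <stdio.h>

int main(void)
{
    int x = 12;                        /* index of the thirteenth character */
    int offset = (3 ^ x) * (x % 15);   /* (3 XOR 12) * 12 = 15 * 12 = 180 */
    int c = 'L' + offset;              /* 76 + 180 = 256 */
    if (c > 0xff)
        c -= 256;                      /* wraps to 0: an embedded terminator */
    printf("%d\n", c);                 /* prints 0 */
    return 0;
}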

It looks like the "encryption" adds to each character a number derived from its position. It can be undone by subtracting the same number:
public Decrypt(string[])
{
    for(new x=0; x < strlen(string); x++)
    {
        string[x] -= (3^x) * (x % 15);
        if(string[x] < (0x00))
        {
            string[x] += 256;
        }
    }
    return 1;
}

Related

Trying to get my own printf function to work with variables

I am programming a display, and I am able to display characters on it by using this function:
void printChar(char *tekst, uint16_t xPos, uint16_t yPos)
{
    // calculate the position the first byte should be placed on
    uint16_t startPos = yPos * width_disp_byte + (xPos/8);
    int i;

    // put all 16 bytes in the right place on the display, based on the user's input
    for(i = 0; i < 16; i++)
    {
        test_image[startPos] = convertChar(*tekst, i);
        startPos += width_disp_byte;
    }
}
Basically I get a character and find its location in an array that I built. Then I take 16 bytes of data and put them on the display.
The next step is to display integer variables on the display. I have written a code that looks like this:
void printVariable(uint16_t integer, uint16_t xPos, uint16_t yPos)
{
    uint16_t value;
    uint16_t remainder;
    char printValue;

    // max value that can be displayed is 9999, caused by the '4' in the for loop
    for(int i = 0; i < 4; i++)
    {
        value = integer;
        // int_power calculates the divisor, going from 3 to 0, i.e. 1000, 100, 10, 1
        value /= int_power(10, (3 - i));
        // remove the highest digit from integer (for 312, remove the 3 by subtracting 300)
        integer -= (value) * (int_power(10, (3 - i)));
        // add '0' to value to get the correct ASCII value
        value += '0';
        // convert uint16_t value into char
        printValue = value;
        printChar(printValue, xPos, yPos);
        xPos += 8;
    }
}
I take a variable, let's say 3164. The first step is to divide this by 1000. The answer will be 3, since it's an integer. I display this character using the printChar function.
The next step removes 3000 from 3164 and divides the remaining 164 by 100, resulting in 1. Again this value is printed using the printChar function. Then 100 is removed from 164, the remainder is divided by 10, and so on.
This code is quite limited in its use, but it fits perfectly in what I want to achieve. There is no need to print variables within a string.
The problem here is that the printChar function does not work like I have written in the code. Normally I would use the printChar function like this:
printChar("F",0,0);
This would print the character F in the top-left corner. If I try to use the printChar function like this, it doesn't work:
printChar(printValue,xPos,yPos);
The warning message says:
incompatible integer to pointer conversion passing 'char' to parameter of type 'char *'; take the address with &
If I take the address with & I don't get the correct value displayed on my display.
How can I fix this?
You only want to print ONE character, so you do not need a pointer as a parameter. Your function would work like this:
void printChar(char tekst, uint16_t xPos, uint16_t yPos){
    ...
    // depending on the parameters of your convertChar function, use either
    ... = convertChar(tekst, i);  // if you can modify that function too
    ... = convertChar(&tekst, i); // if the function is to be used as it is
}
The difference is char tekst instead of char *tekst.
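If you cannot change printChar itself, one common workaround (a sketch, not part of the original code) is to wrap the single character in a small null-terminated buffer so the existing char * signature still works:

char buf[2];
buf[0] = printValue;  /* the digit character computed in printVariable */
buf[1] = '\0';        /* terminate it so it is a valid C string */
printChar(buf, xPos, yPos);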

Trying to understand the workings of com.adobe.net.URIEncodingBitmap

I'm examining the URIEncodingBitmap class of the com.adobe.net package, and I'm having a hard time understanding the internal workings, exactly. Here's the code:
package com.adobe.net
{
    import flash.utils.ByteArray;

    /**
     * This class implements an efficient lookup table for URI
     * character escaping. This class is only needed if you
     * create a derived class of URI to handle custom URI
     * syntax. This class is used internally by URI.
     *
     * @langversion ActionScript 3.0
     * @playerversion Flash 9.0
     */
    public class URIEncodingBitmap extends ByteArray
    {
        /**
         * Constructor. Creates an encoding bitmap using the given
         * string of characters as the set of characters that need
         * to be URI escaped.
         *
         * @langversion ActionScript 3.0
         * @playerversion Flash 9.0
         */
        public function URIEncodingBitmap(charsToEscape:String) : void
        {
            var i:int;
            var data:ByteArray = new ByteArray();

            // Initialize our 128 bits (16 bytes) to zero
            for (i = 0; i < 16; i++)
                this.writeByte(0);

            data.writeUTFBytes(charsToEscape);
            data.position = 0;

            while (data.bytesAvailable)
            {
                var c:int = data.readByte();
                if (c > 0x7f)
                    continue; // only escape low bytes

                var enc:int;
                this.position = (c >> 3);
                enc = this.readByte();
                enc |= 1 << (c & 0x7);
                this.position = (c >> 3);
                this.writeByte(enc);
            }
        }

        /**
         * Based on the data table contained in this object, check
         * if the given character should be escaped.
         *
         * @param char the character to be escaped. Only the first
         * character in the string is used. Any other characters
         * are ignored.
         *
         * @return the integer value of the raw UTF8 character. For
         * example, if '%' is given, the return value is 37 (0x25).
         * If the character given does not need to be escaped, the
         * return value is zero.
         *
         * @langversion ActionScript 3.0
         * @playerversion Flash 9.0
         */
        public function ShouldEscape(char:String) : int
        {
            var data:ByteArray = new ByteArray();
            var c:int, mask:int;

            // write the character into a ByteArray so
            // we can pull it out as a raw byte value.
            data.writeUTFBytes(char);
            data.position = 0;
            c = data.readByte();

            if (c & 0x80)
            {
                // don't escape high byte characters. It can make international
                // URI's unreadable. We just want to escape characters that would
                // make URI syntax ambiguous.
                return 0;
            }
            else if ((c < 0x1f) || (c == 0x7f))
            {
                // control characters must be escaped.
                return c;
            }

            this.position = (c >> 3);
            mask = this.readByte();

            if (mask & (1 << (c & 0x7)))
            {
                // we need to escape this, return the numeric value
                // of the character
                return c;
            }
            else
            {
                return 0;
            }
        }
    }
}
Although I understand the workings of ByteArray and of the various (bitwise) operators (>>, <<, &, |=, etc.), I'm almost at a complete loss as to what this class does exactly (or rather: why it does things the way it does).
Could somebody give a run down on what the purpose of all the bit-shifting and masking is in this class? Particularly:
What is the constructor initializing exactly, and why?
a. What is this.position = (c >> 3); doing, or rather, why?
b. What is enc |= 1 << (c & 0x7); doing?
What is the mask doing exactly in ShouldEscape()?
ad 1. The constructor creates an escape-definition array (length 16 bytes = 128 bits), one bit per ASCII character. The position of the bit corresponds to the character's code, and its value says whether that character should be escaped or not.
ad a. This line calculates which byte of the escape-definition array holds the bit for the given character.
ad b. This sets the bit corresponding to the character within that byte.
ad 2. mask is the byte holding the bit for the given character; it is used to check whether that bit is set or not.
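As a concrete illustration, here is a small C sketch (hypothetical helper names) of the same bit layout: 16 bytes hold 128 flags, the byte is picked with c >> 3 and the bit within it with c & 0x7. For '%' (code 37) that is byte 4, bit 5.

#include <stdio.h>
#include <string.h>

static unsigned char bitmap[16];        /* 128 bits, one per ASCII character */

static void markEscaped(int c)          /* what the constructor does per character */
{
    bitmap[c >> 3] |= 1 << (c & 0x7);   /* byte index, then bit within the byte */
}

static int shouldEscape(int c)          /* what ShouldEscape checks with its mask */
{
    return (bitmap[c >> 3] >> (c & 0x7)) & 1;
}

int main(void)
{
    memset(bitmap, 0, sizeof bitmap);
    markEscaped('%');                                         /* '%' = 37: byte 4, bit 5 */
    printf("%d %d\n", shouldEscape('%'), shouldEscape('A'));  /* prints 1 0 */
    return 0;
}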

Why does uint break my for loop?

This is not really a problem as the fix is simple and pretty costless. I'm guessing it's some property of for or uint that I don't understand and I just would like to know what is going on so...
Using ActionScript 3 I set up a for loop to run backwards through the elements of a Vector.
var limit:uint = myVector.length-1;
for(var a:uint = limit; a >= 0; a--)
{
    trace(a);
}
When I run this it outputs 2, 1, 0 as expected but then moves on to 4294967295 and begins counting down from there until the loop times out and throws an Error #1502.
The fix is simply to type a as int rather than uint but I don't get why. Surely I am dealing with values of 0 and greater so uint is the correct data type right?
I guess that 4294967295 is the max value for uint but how does my count get there?
If you do
var myUint:uint = 0;
trace(myUint - 1);
Then the output is -1, so why, in my loop, should I suddenly jump back up to 4294967295?
Sorry for the slightly rambling question and cheers for any help.
You are close. As you said, your loop gives you 2, 1, 0, 4294967295. This is because uint can't be negative. Your loop will always run while a >= 0, and since a can never reach -1 to break the loop condition, it continues to loop forever.
var myUint:uint = 0;
trace(myUint - 1);
I didn't test this but what is probably happening is that myUint is being converted to an int and then having 1 subtracted. The following code should be able to confirm this.
var myUint:uint = 0;
trace((myUint - 1) is uint);
trace((myUint - 1) is int);
To fix your loop you could use int or you could use a for each(var x:Type in myVector) loop if you don't need the index (a).
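The same trap is easy to reproduce in C with an unsigned counter, which also shows one common fix (a sketch with made-up names, not ActionScript):

#include <stdio.h>

int main(void)
{
    unsigned int limit = 2;

    /* Broken: a >= 0 is always true for an unsigned type, and 0 - 1
     * wraps around to the maximum value instead of becoming negative. */
    /* for (unsigned int a = limit; a >= 0; a--) ...  never terminates */

    /* One safe pattern: test-then-decrement in the condition; the test
     * sees the value before the decrement, so the body still runs for
     * index 0 and the loop then stops cleanly. */
    for (unsigned int a = limit + 1; a-- > 0; )
        printf("%u\n", a);   /* prints 2, 1, 0 */

    return 0;
}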
An iteration and an a-- still occur when a is 0; since your variable is unsigned, the decrement wraps around to the maximum value for the type. This is how negative values behave in unsigned types.
Change your code into:
var limit:uint = myVector.length;
for(var a:uint = limit; a-- > 0; )
{
    trace(a);
}
Note that limit is now the length itself and the decrement happens in the condition: the test sees the old value, so the body still runs for index 0 and the loop then stops instead of wrapping around.
In binary, the first bit of an int is the sign bit; in a uint that same bit is just part of the value.
A one-byte signed int: 1000 0000 == -128 and 0111 1111 == 127.
A one-byte unsigned int: 1000 0000 == 128 and 0111 1111 == 127.
This is it.
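A quick C check of those byte patterns (a sketch; int8_t and uint8_t stand in for the one-byte signed and unsigned values discussed above):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int8_t  s = (int8_t)0x80;    /* bit pattern 1000 0000 read as a signed byte   */
    uint8_t u = (uint8_t)0x80;   /* the same bit pattern read as an unsigned byte */
    printf("%d %d\n", s, u);                          /* prints: -128 128 */
    printf("%d %d\n", (int8_t)0x7F, (uint8_t)0x7F);   /* prints: 127 127 */
    return 0;
}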

Number type and bitwise operations

I want to pack epoch milliseconds into 6 bytes, but I have a problem. Let me introduce it:
trace(t);
for (var i:int = 6; i > 0; i--) {
    dataBuffer.writeByte(((t >>> 8*(i-1)) & 255));
    trace(dataBuffer[dataBuffer.length - 1]);
}
Output:
1330454496254
131
254
197
68
131
254
What am I doing wrong?
I'm just guessing but I think your t variable is getting automatically converted to an int before the bit operation takes effect. This, of course, destroys the value.
I don't think it's possible to use Number in bit operations: AS3 only supports those on ints.
Depending on how you acquire the value in t, you may want to start with two ints and then extract the bytes from those, as sketched below.
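For example (a C sketch, not ActionScript; writeMillis6 is a hypothetical name): a millisecond timestamp fits exactly in a double, so you can split it into two 32-bit halves with plain division before doing any bit work, then emit the low six bytes most-significant first.

#include <stdio.h>
#include <stdint.h>

/* Split a millisecond timestamp held in a double (like an AS3 Number)
 * into two 32-bit halves, then write the lowest 6 bytes big-endian. */
static void writeMillis6(double t, uint8_t out[6])
{
    uint32_t hi = (uint32_t)(t / 4294967296.0);               /* top 32 bits, no bit ops on the double */
    uint32_t lo = (uint32_t)(t - (double)hi * 4294967296.0);  /* bottom 32 bits */

    out[0] = (hi >> 8) & 0xFF;
    out[1] = hi & 0xFF;
    out[2] = (lo >> 24) & 0xFF;
    out[3] = (lo >> 16) & 0xFF;
    out[4] = (lo >> 8) & 0xFF;
    out[5] = lo & 0xFF;
}

int main(void)
{
    uint8_t b[6];
    writeMillis6(1330454496254.0, b);
    for (int i = 0; i < 6; i++)
        printf("%02x ", b[i]);   /* prints: 01 35 c5 44 83 fe */
    printf("\n");
    return 0;
}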
The Number type is an IEEE 754 64-bit double-precision number, which is quite a different format to your normal int. The bits aren't lined up quite the same way. What you're looking for is a ByteArray representation of a normal 64-bit int type, which of course doesn't exist in ActionScript 3.
Here's a function that converts a Number object into its "int64" equivalent:
private function numberToInt64Bytes(n:Number):ByteArray
{
    // Write your IEEE 754 64-bit double-precision number to a byte array.
    var b:ByteArray = new ByteArray();
    b.writeDouble(n);

    // Get the exponent.
    var e:int = ((b[0] & 0x7F) << 4) | (b[1] >> 4);

    // Significant bits.
    var s:int = e - 1023;

    // Number of bits to shift towards the right.
    var x:int = (52 - s) % 8;

    // Read and write positions in the byte array.
    var r:int = 8 - int((52 - s) / 8);
    var w:int = 8;

    // Clear the first two bytes of the sign bit and the exponent.
    b[0] &= 0x80;
    b[1] &= 0xF;

    // Add the "hidden" fraction bit.
    b[1] |= 0x10;

    // Shift everything.
    while (w > 1) {
        if (--r > 0) {
            if (w < 8)
                b[w] |= b[r] << (8 - x);
            b[--w] = b[r] >> x;
        } else {
            b[--w] = 0;
        }
    }

    // Now you've got your 64-bit signed two's complement integer.
    return b;
}
Note that it works only with integers within a certain range, and it doesn't handle values like "not a number" and infinity. It probably also fails in other cases.
Here's a usage example:
var n:Number = 1330454496254;
var bytes:ByteArray = numberToInt64Bytes(n);

trace("bytes:",
    bytes[0].toString(16),
    bytes[1].toString(16),
    bytes[2].toString(16),
    bytes[3].toString(16),
    bytes[4].toString(16),
    bytes[5].toString(16),
    bytes[6].toString(16),
    bytes[7].toString(16)
);
Output:
bytes: 0 0 1 35 c5 44 83 fe
It should be useful for serializing data in AS3 later to be read by a Java program.
Homework assignment: Write int64BytesToNumber()

OCR: weighted Levenshtein distance

I'm trying to create an optical character recognition system with a dictionary.
In fact I don't have an implemented dictionary yet =)
I've heard that there are simple metrics based on Levenshtein distance which take into account the different distances between different symbols. E.g. 'N' and 'H' are very close to each other, and d("THEATRE", "TNEATRE") should be less than d("THEATRE", "TOEATRE"), which is impossible using the basic Levenshtein distance.
Could you help me locate such a metric, please?
This might be what you are looking for: http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance (some working code is kindly included in the link)
Update:
http://nlp.stanford.edu/IR-book/html/htmledition/edit-distance-1.html
Here is an example (C#) where the weight of the "replace character" operation depends on the distance between character codes:
static double WeightedLevenshtein(string b1, string b2) {
    b1 = b1.ToUpper();
    b2 = b2.ToUpper();

    double[,] matrix = new double[b1.Length + 1, b2.Length + 1];

    for (int i = 1; i <= b1.Length; i++) {
        matrix[i, 0] = i;
    }
    for (int i = 1; i <= b2.Length; i++) {
        matrix[0, i] = i;
    }

    for (int i = 1; i <= b1.Length; i++) {
        for (int j = 1; j <= b2.Length; j++) {
            double distance_replace = matrix[(i - 1), (j - 1)];
            if (b1[i - 1] != b2[j - 1]) {
                // Cost of replace
                distance_replace += Math.Abs((float)(b1[i - 1]) - b2[j - 1]) / ('Z' - 'A');
            }

            // Cost of remove = 1
            double distance_remove = matrix[(i - 1), j] + 1;
            // Cost of add = 1
            double distance_add = matrix[i, (j - 1)] + 1;

            matrix[i, j] = Math.Min(distance_replace,
                                    Math.Min(distance_add, distance_remove));
        }
    }

    return matrix[b1.Length, b2.Length];
}
You can see how it works here: http://ideone.com/RblFK
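To see why this gives the ordering asked for in the question, here is a tiny C sketch of just the replace-cost formula used above, |a - b| / ('Z' - 'A'), for substituting 'N' or 'O' in place of 'H':

#include <stdio.h>
#include <math.h>

int main(void)
{
    double costHN = fabs((double)('H' - 'N')) / ('Z' - 'A');  /* 6 / 25 = 0.24 */
    double costHO = fabs((double)('H' - 'O')) / ('Z' - 'A');  /* 7 / 25 = 0.28 */
    printf("%.2f %.2f\n", costHN, costHO);                    /* prints 0.24 0.28 */
    return 0;
}

Since each pair of words differs by a single substitution, d("THEATRE", "TNEATRE") = 0.24 comes out smaller than d("THEATRE", "TOEATRE") = 0.28, as desired.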
A few years too late, but the following Python package (with which I am NOT affiliated) allows for arbitrary weighting of all the Levenshtein edit operations, ASCII character mappings, etc.
https://github.com/infoscout/weighted-levenshtein
pip install weighted-levenshtein
Also this one (also not affiliated):
https://github.com/luozhouyang/python-string-similarity
I've recently created a Python package that does exactly that: https://github.com/zas97/ocr_weighted_levenshtein.
In my weighted Levenshtein implementation the distance between "THEATRE" and "TNEATRE" is 1.3, while the distance between "THEATRE" and "TOEATRE" is 1.42.
Other examples: d("O", "0") is 0.06 and d("e", "c") is 0.57.
These distances were calculated by running multiple OCRs on a synthetic dataset and doing statistics on the most common OCR errors. I hope it helps someone =)