AS3 WebSocket Handshake - actionscript-3

I'm trying to build an AS3 socket server that can handshake with HTML5 WebSockets. I've based my code on this link: https://datatracker.ietf.org/doc/html/draft-ietf-hybi-thewebsocketprotocol-17
This is what I have, using the same values as the example in the link:
import com.dynamicflash.util.Base64;
import com.adobe.crypto.SHA1;
function getKey():void {
    var key:String = "dGhlIHNhbXBsZSBub25jZQ==258EAFA5-E914-47DA-95CA-C5AB0DC85B11";
    key = SHA1.hash(key);
    key = Base64.encode(key);
    trace(key);
    // traces YjM3YTRmMmNjMDYyNGYxNjkwZjY0NjA2Y2YzODU5NDViMmJlYzRlYQ== instead of s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
}
Now the example states that the output should be:
Concretely, if as in the example above, |Sec-WebSocket-Key| header field had the value "dGhlIHNhbXBsZSBub25jZQ==", the server would concatenate the string "258EAFA5-E914-47DA-95CA-C5AB0DC85B11" to form the string "dGhlIHNhbXBsZSBub25jZQ==258EAFA5-E914-47DA-95CA-C5AB0DC85B11". The server would then take the SHA-1 hash of this, giving the value 0xb3 0x7a 0x4f 0x2c 0xc0 0x62 0x4f 0x16 0x90 0xf6 0x46 0x06 0xcf 0x38 0x59 0x45 0xb2 0xbe 0xc4 0xea. This value is then base64-encoded (see Section 4 of [RFC4648]), to give the value "s3pPLMBiTxaQ9kYGzzhZRbK+xOo="
Am I missing something?

It's a while since I've even read any ActionScript, but shouldn't you replace
key = SHA1.hash(key);
key = Base64.encode(key);
with
key = SHA1.hashToBase64(key);
? The current code converts the SHA-1 hash (a byte array) into a string, but it's the original byte array you need to pass into the Base64 encoder.
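For reference, a minimal sketch of the whole key computation done that way, assuming the as3corelib com.adobe.crypto.SHA1 class (which is what provides hashToBase64) and the GUID fixed by the spec:
import com.adobe.crypto.SHA1;

function acceptKey(secWebSocketKey:String):String {
    // Concatenate the client's Sec-WebSocket-Key with the GUID from the spec,
    // then SHA-1 it and Base64-encode the raw digest bytes in one step.
    var guid:String = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";
    return SHA1.hashToBase64(secWebSocketKey + guid);
}
trace(acceptKey("dGhlIHNhbXBsZSBub25jZQ==")); // should trace s3pPLMBiTxaQ9kYGzzhZRbK+xOo=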

Let me know if this is of any help:
https://github.com/childoftv/as3-websocket-server

Related

How can I convert a bitstring to the binary form in Julia

I am using bitstring to perform an xor operation on the ith bit of a string:
string = bitstring(string ⊻ 1 << i)
However the result will be a string, so I cannot continue with another i.
So I want to know: how do I convert a bitstring (of the form "000000000000000000000001001") back to an integer (like 0b1001)?
Thanks
You can use parse to create an integer from the string, and then use string (alt. bitstring) to go the other way. Examples:
julia> str = "000000000000000000000001001";
julia> x = parse(UInt, str; base=2) # parse as UInt from input in base 2
0x0000000000000009
julia> x == 0b1001
true
julia> string(x; base=2) # stringify in base 2
"1001"
julia> bitstring(x) # stringify as bits (64 bits since UInt64 is 64 bits)
"0000000000000000000000000000000000000000000000000000000000001001"
Don't use bitstring. You can either do the math with a BitVector or just a UInt. No reason to bring a String into it.
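A minimal sketch of that suggestion (keep everything as a UInt and only stringify at the end for display):
julia> x = parse(UInt, "000000000000000000000001001"; base=2);

julia> x ⊻= UInt(1) << 2   # toggle bit 2 directly on the integer
0x000000000000000d

julia> string(x; base=2)   # stringify only when you need to show it
"1101"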

Shouldn't TclInvalidateStringRep() reset length?

I have a doubt about the following code in the Tcl 8.6.8 source, tclInt.h:
4277 #define TclInvalidateStringRep(objPtr) \
4278 if (objPtr->bytes != NULL) { \
4279 if (objPtr->bytes != tclEmptyStringRep) { \
4280 ckfree((char *) objPtr->bytes); \
4281 } \
4282 objPtr->bytes = NULL; \
4283 }
This macro is called by Tcl_InvalidateStringRep() in tclObj.c.
My doubt is: why doesn't the Tcl_Obj's length get reset to zero?
Here is part of the definition of Tcl_Obj:
808 typedef struct Tcl_Obj {
809 int refCount; /* When 0 the object will be freed. */
810 char *bytes; /* This points to the first byte of the
811 * object's string representation. The array
812 * must be followed by a null byte (i.e., at
813 * offset length) but may also contain
814 * embedded null characters. The array's
815 * storage is allocated by ckalloc. NULL means
816 * the string rep is invalid and must be
817 * regenerated from the internal rep. Clients
818 * should use Tcl_GetStringFromObj or
819 * Tcl_GetString to get a pointer to the byte
820 * array as a readonly value. */
821 int length; /* The number of bytes at *bytes, not
822 * including the terminating null. */
So you can see that length is tightly coupled with bytes; when bytes is cleared, shouldn't we reset length as well?
My doubt comes from the following code, TclCreateLiteral() in tclLiteral.c:
200 for (globalPtr=globalTablePtr->buckets[globalHash] ; globalPtr!=NULL;
201 globalPtr = globalPtr->nextPtr) {
202 objPtr = globalPtr->objPtr;
203 if ((globalPtr->nsPtr == nsPtr)
204 && (objPtr->length == length) && ((length == 0)
205 || ((objPtr->bytes[0] == bytes[0])
206 && (memcmp(objPtr->bytes, bytes, (unsigned) length) == 0)))) {
So at line 204, when length is not zero while bytes is NULL, the program crashes.
My product includes the Tcl source, and I found the above problem when tracing a program crash. I put a workaround in our code, but I'd like to confirm with the community whether it indeed is a vulnerability.
Your approach seems to be wrong somewhere.
Calling TclInvalidateStringRep is basically only allowed for objects with no references (refCount == 0) or with exactly one reference (so refCount <= 1), and then only if you are sure that this one reference is your own.
A shared Tcl object may switch its internal representation, but the string representation remains immutable. Otherwise you will break the basic principles of Tcl (like EIAS, etc.).
The simplest example that can explain this:
set k 0x7f
dict set d $k test
expr {$k}; # ==> 127 (obj is integer now, but...)
puts $k; # ==> 0x7f (... still remains the string-representation)
puts [dict get $d $k]; # ==> test
# some code that fouls it up (despite of two references var `k` and key in dict `d`):
magic_happens_here $k; # string representation gets lost.
# and hereafter:
puts $k; # ==> 127 (representation is now 127, so...)
puts [dict get $d $k]; # ==> ERROR: key "127" not known in dictionary
As you can see, resetting resp. altering the string representation of a shared object is wrong by design.
Please avoid this in Tcl.
I've had a think about this, and while I believe that the code that is purging the representation is wrong to do so (since the object should in principle be shared and so shouldn't be observed to change) I certainly think that it is extremely difficult to actually prove that that can't happen. For sure, TclCreateLiteral in tclLiteral.c shouldn't blow up if it happens!
The fix I'm using is to make TclCreateLiteral use TclGetStringFromObj (the Tcl-internal macro-ized version of Tcl_GetStringFromObj) to get the bytes and length fields instead of using them directly, so that the correct constraints are preserved. This should make the string representation exist once more if it is removed. If the code continues to crash, the problem is your code that is calling TclInvalidateStringRep on a literal (and setting a type that can't have a string generated for it; Tcl has some of those, but that's because it never purges the original string from them).
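As an illustration only (a sketch of the shape of that change, not the actual patch), the lookup condition quoted above would become roughly:
/* Sketch: fetch the string rep through the accessor instead of reading
 * objPtr->bytes / objPtr->length directly, so a purged rep is regenerated. */
int objLength;
char *objBytes = TclGetStringFromObj(objPtr, &objLength);

if ((globalPtr->nsPtr == nsPtr)
        && (objLength == length) && ((length == 0)
        || ((objBytes[0] == bytes[0])
        && (memcmp(objBytes, bytes, (unsigned) length) == 0)))) {
    /* ... existing literal-found handling ... */
}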
Remember, a Tcl_Obj should only have its string rep purged when it becomes wrong, not just when it gains a non-string representation. The fact a value has been interpreted as an integer doesn't mean that it shouldn't be interpretable as a list (quite the reverse!) and if the internal representation is never updated to a different value (in-place modifications should only ever happen to unshared objects) it should never need to lose that string representation at all.

How to lossless convert a double to string and back in Octave

When saving a double to a string there is some loss of precision. Even if you use a very large number of digits the conversion may not be reversible, i.e. if you convert a double x to a string sx and then convert back you will get a number x' which may not be bitwise equal to x. This may cause problems, for instance when checking for differences in a battery of tests. One possibility is to use a binary form (for instance the native binary format, or HDF5), but I want to store the number in a text file, so I need a conversion to a string. I have a working solution, but I ask if there is some standard for this or a better solution.
In C/C++ you could reinterpret the double's bytes through a char* pointer and then convert each byte to a two-digit hex value with printf("%02x",c[j]). Then, for instance, pi would be converted to a string of length 16: 54442d18400921fb. The problem with this is that if you read the hex you don't get any idea of which number it is. So I would be interested in some mix, for instance pi -> 3.14{54442d18400921fb}. The first part is a (probably low-precision) decimal representation of the number (typically I would use a "%g" output conversion) and the string in braces is the lossless hexadecimal representation.
EDIT: I post the code as an answer below.
Following the ideas already suggested in the post, I wrote the following functions, which seem to work.
function s = dbl2str(d);
  z = typecast(d,"uint32");
  s = sprintf("%.3g{%08x%08x}\n",d,z);
endfunction

function d = str2dbl(s);
  k1 = index(s,"{");
  k2 = index(s,"}");
  ## Check that there is a balanced {} pair, or none at all
  assert((k1==0) == (k2==0));
  if k1>0; assert(k2>k1); endif
  if (k1==0);
    ## If there is no {hexa} part, convert with loss
    d = str2double(s);
  else
    ## Convert losslessly from the two hex words
    ss = substr(s,k1+1,k2-k1-1);
    z = uint32(sscanf(ss,"%8x",2));
    d = typecast(z,"double");
  endif
endfunction
Then I have
>> spi=dbl2str(pi)
spi = 3.14{54442d18400921fb}
>> pi2 = str2dbl(spi)
pi2 = 3.1416
>> pi2-pi
ans = 0
>> snan = dbl2str(NaN)
snan = NaN{000000007ff80000}
>> nan1 = str2dbl(snan)
nan1 = NaN
A further improvement would be to use another type of encoding, for instance Base64 (as suggested by @CrisLuengo in a comment), which would reduce the length of the binary part from 16 to 11 characters.

How to create a TCL variable of type bytearray

I am using TCL 8.4.20.
So I have the following code:
set a [binary format H2 1]
set b [binary format H2 2]
set c [binary format H2 3]
set bytes $a
append bytes $a
append bytes $b
append bytes $c
puts $bytes
I set a breakpoint at the Tcl_PutsObjCmd() function in Tcl's C source code and I see that its argument, $bytes, is of type string, while I expect it to be bytearray.
Question 1: Why is that? From the first assignment to the final appending, "bytes" accepts nothing but binary data.
The reason I do this experiment is that we have a Tcl extension command in C which expects its argument to be of byte array type - it checks that the value's typePtr is tclByteArrayType. My Tcl code currently fails on this command because the data passed to the command is of type string, just as demoed above.
I googled around; it seems the "right" way to make a byte array object is to have every byte ready first and finally use one "binary format" command to put them all into one. But that is a fairly big change to my current Tcl code.
Question 2: Given that I already have a Tcl variable whose data are all binary (created using "binary format" for each byte and put together using "append") while its type is string, how can I change its internal type to "bytearray" through some Tcl maneuvering?
Technically, the internal type is not a guaranteed property. Everything is a string. The code may shimmer a type away whenever it feels like it, and code that depends on the internal type is usually very brittle or outright broken.
So your C code should call Tcl_GetByteArrayFromObj() instead of peeking at the argument's internals. That does the proper conversion if the object does not yet have a byteArray representation.
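A minimal sketch of what that looks like in an extension command (the command name and argument handling here are made up for illustration; the point is to convert via the accessor rather than inspect typePtr):
#include <tcl.h>

static int
MyBytesCmd(ClientData cd, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])
{
    unsigned char *data;
    int length;

    if (objc != 2) {
        Tcl_WrongNumArgs(interp, 1, objv, "bytes");
        return TCL_ERROR;
    }
    /* Converts the argument to a byte array rep if necessary and returns it. */
    data = Tcl_GetByteArrayFromObj(objv[1], &length);
    /* ... work with data[0 .. length-1] ... */
    return TCL_OK;
}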
About your questions:
Why doesn't append of two byte arrays keep the byte array type?
It does, at least for 8.6, if you do it right and never trigger the creation of a string rep.
Running this in tkcon, the append turns the value into a string:
() 98 % set a [binary format H2 1]

() 99 % set b [binary format H2 1]

() 100 % ::tcl::unsupported::representation $a
value is a bytearray with a refcount of 2, object pointer at 0000000005665420, internal representation 000000000587B280:0000000005665240, string representation ""
() 101 % ::tcl::unsupported::representation $b
value is a bytearray with a refcount of 2, object pointer at 000000000564EEB0, internal representation 000000000587B4A0:00000000056590E0, string representation ""
() 102 % set x $a

() 103 % ::tcl::unsupported::representation $x
value is a bytearray with a refcount of 4, object pointer at 0000000005665420, internal representation 000000000587B280:0000000005665240, string representation ""
() 104 % append x $b

() 105 % ::tcl::unsupported::representation $x
value is a string with a refcount of 3, object pointer at 0000000005663F50, internal representation 0000000005896BA0:000000000564F030, string representation ""
This happens because the bytearray has had a string rep created (due to tkcon echoing the value). The append optimization only works for 'pure' bytearrays, i.e. bytearrays that do not have a string rep. This is similar to some optimizations for 'pure' lists.
So it works like this, avoiding the result echo that causes the shimmering:
() 106 % set b [binary format H2 1]; puts "pure"
pure
() 107 % set a [binary format H2 1]; puts "pure"
pure
() 108 % set x $a; puts "pure"
pure
() 109 % ::tcl::unsupported::representation $a
value is a bytearray with a refcount of 3, object pointer at 0000000005658780, internal representation 000000000587B320:0000000005658CF0, no string representation
() 110 % ::tcl::unsupported::representation $b
value is a bytearray with a refcount of 2, object pointer at 000000000564ED60, internal representation 000000000587B500:0000000005658750, no string representation
() 111 % ::tcl::unsupported::representation $x
value is a bytearray with a refcount of 3, object pointer at 0000000005658780, internal representation 000000000587B320:0000000005658CF0, no string representation
() 112 % append x $b; puts "pure"
pure
() 113 % ::tcl::unsupported::representation $x
value is a bytearray with a refcount of 2, object pointer at 0000000005658690, internal representation 00000000058A5C60:0000000005658960, no string representation
Note the no string representation part.
How to turn a string into a bytearray
Just do a binary format:
set x [binary format a* $x]

Working with 4 and 7 byte HEX values in actionscript

I have built an NFC/RFID reader interfacing an Arduino with an Adobe AIR application.
My confusion is in how to deal with the 4- and 7-byte hex UID values being returned, for example
0xED 0xAD 0x8F 0x9A
or
0x04 0x70 0xE9 0x2A 0x42 0x2B 0x80
Converting a simple HEX value to decimal in AS3 is straightforward, namely
var decimal:int = parseInt("FFFFFF",16); // output : 16777215
But how would I 'massage' the returned RFID HEX values first BEFORE trying to convert using the parseInt method?
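One rough sketch of the kind of massaging this is after, assuming the UID arrives as space-separated "0x.." tokens (adjust the splitting to match how your serial protocol actually frames the bytes):
function uidToDecimal(uid:String):Number {
    // Strip the "0x" prefixes and the spaces, leaving one contiguous hex string.
    var hex:String = uid.split(" ").join("").split("0x").join("");
    return parseInt(hex, 16);
}
trace(uidToDecimal("0xED 0xAD 0x8F 0x9A")); // 3987574682
// Caveat: a 7-byte UID is 56 bits, which does not fit in int/uint (32-bit) and
// exceeds Number's 53-bit exact-integer range, so for 7-byte tags it may be
// safer to keep the UID as the hex string itself.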