When I send 127+ characters from chrome websocket, my golang server cannot see more than 126

I'm having a blast reinventing the wheel and playing with bits to implement a simple server. It's almost functional, but I'm not sure whether this issue is in my client or my server. Here is the function where I pass in the byte array that results from net.Conn's Read:
func readWsFrame(p []byte) {
    // process first byte
    b := p[0]
    fmt.Printf("first byte: %b\n", b)
    fin := b & 128 // hopefully 128, for fin
    op := b & 15   // hopefully 1, for text
    fmt.Printf("fin: %d\nop: %d\n", fin, op)
    // process second byte
    b = p[1]
    fmt.Printf("second byte: %b\n", b)
    masked := b & 128 // whether or not the payload is masked
    length := b & 127 // payload length
    fmt.Printf("masked: %d\nlength: %d\n", masked, length)
    // process bytes 3-6 (masking key)
    key := p[2:6]
    // payload
    d := p[6:]
    if length == 126 {
        key = p[4:8]
        d = p[8:]
        fmt.Println("med key")
    } else if length == 127 {
        key = p[10:14]
        d = p[14:]
        fmt.Println("big key")
    } else {
        fmt.Println("lil key")
    }
    fmt.Printf("masking key: %b\n", key)
    fmt.Printf("masked data: %b\n", d)
    var decoded []byte
    for index := 0; index < int(length); index++ {
        decoded = append(decoded, d[index]^key[index%4])
    }
    fmt.Printf("unmasked data: %b\n", decoded)
    payload := string(decoded)
    fmt.Println("payload: ", payload)
}
The client code is just me opening the dev console right on this web page and running:
var ws = new WebSocket("ws://localhost:16163");
ws.send("a".repeat(125))
ws.send("a".repeat(126))
ws.send("a".repeat(127))
ws.send("a".repeat(400))
My server is doing what I expect until I reach 127 characters. At that point, and for every amount over 126, my 2nd byte is 11111110 and the length is 126. I can see the unmasked/decoded/magic message doesn't go beyond 126 a's.
I'm sure my Go code is sub-par and there might be something obvious here, but I'm looking at the bits themselves, and I can see a 0 where I am expecting a 1. Please help me, thank you!
I saw a similar question about writing messages larger than 126 bytes and how I'll need extra bytes for the payload size, but in this case my client is the Chrome web browser.
--edit:
I realize that I will never see more than 126 characters based on the loop I have there, but I should still see a 1 in the final bit of the second byte for these larger messages, right?
--edit:
Came across this: how to work out payload size from html5 websocket
I guess I misunderstood everything else I was searching for. Can someone confirm this? If the length is <126, the length is byte & 127. If the length is 126, the length is the value of the next 2 bytes. If the length is 127, the length is the next 4 bytes.
I thought initially that the length would be 127 if the payload length was 127+, hah, oops. So when the length indicator is 126 or 127, the 2nd byte is not part of the actual length? I'll probably confirm all of this with testing, but I thank you all for resolving this issue before the weekend so I can finish this side project.

The code should update the length value after seeing 126 or 127 by reading the actual length from the bytes that follow the initial length indicator.
I would consider those 7 "length" bits slightly differently. They don't really indicate length so much as they indicate encoding.
If the length is encoded using 64 bits (8 bytes), then the length indicator == 127.
If the length is encoded in 2 bytes, then the indicator == 126.
Otherwise, the length is encoded in the 7 bits of the indicator itself.
For example, the length 60 can be encoded in all three ways (albeit some use more space).
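To make that concrete, here is a rough sketch in C of decoding the length and locating the masking key; the function name is illustrative, and it assumes the whole frame header is already buffered:
#include <stdint.h>
#include <stddef.h>

/* Sketch: decode the payload length of a masked client frame in buf
   and report where the 4-byte masking key starts. */
uint64_t ws_payload_length(const uint8_t *buf, size_t n, size_t *key_offset)
{
    if (n < 2)
        return 0;
    uint8_t indicator = buf[1] & 127;  /* low 7 bits of the second byte */
    if (indicator < 126) {
        *key_offset = 2;               /* the indicator is the length */
        return indicator;
    }
    if (indicator == 126 && n >= 4) {  /* 16-bit length in bytes 3-4 */
        *key_offset = 4;
        return ((uint64_t)buf[2] << 8) | buf[3];
    }
    if (indicator == 127 && n >= 10) { /* 64-bit length in bytes 3-10 */
        *key_offset = 10;
        uint64_t len = 0;
        for (int i = 0; i < 8; i++)
            len = (len << 8) | buf[2 + i];
        return len;
    }
    return 0; /* header not fully buffered yet */
}
With 127 a's, the indicator is 126 and the real length (127) sits in the two bytes that follow, which is exactly the 11111110 second byte (mask bit set, indicator 126) observed in the question.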
There's a C Websocket implementation here if you want to read an example decoding of the length.
Good Luck!

Related

Converting Big endian values that i receive on Modbus to float value

I am writing some code for Modbus RTU but am having a problem converting the data received.
Below is the code. I am able to communicate with the slave device; however, the info I receive back does not make sense at all.
The slave address is 2000 (hex value), the datablocks are 2, and the response is Float - Big Endian (ABCD).
However, when I view it via serial print it makes no sense. Anyone who can help would be greatly appreciated.
void loop()
{
    uint8_t j, result;
    uint16_t data[6];

    // slave: read (6) 16-bit registers starting at register .. to RX buffer;
    // this address is in decimal, so convert hex to decimal to use the
    // correct address
    result = node.readHoldingRegisters(8192, 2);

    // do something with the data if the read is successful
    if (result == node.ku8MBSuccess)
    {
        for (j = 0; j < 6; j++)
        {
            data[j] = node.getResponseBuffer(j);
            Serial.println(data[j]);
        }
    }
    delay(1000);
}
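As a sketch of what "Float - Big Endian (ABCD)" implies (the function name here is illustrative, not from the ModbusMaster library): the two registers are simply the four bytes of an IEEE-754 float, high word first.
#include <stdint.h>
#include <string.h>

/* Sketch: combine two 16-bit registers into a float, assuming "ABCD"
   ordering: reg0 holds the high word (bytes A,B), reg1 the low word
   (bytes C,D). */
float regs_to_float_abcd(uint16_t reg0, uint16_t reg1)
{
    uint32_t bits = ((uint32_t)reg0 << 16) | reg1;
    float f;
    memcpy(&f, &bits, sizeof f); /* reinterpret the 32 bits as a float */
    return f;
}
In the loop above that would be something like regs_to_float_abcd(node.getResponseBuffer(0), node.getResponseBuffer(1)) rather than printing the raw register values.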

Using I2C_master library AVR

I am using the I2C_master library for AVR. Communication works fine, but I have a little problem: how can I get the data?
I am using this function
uint16_t i2c_2byte_readReg(uint8_t devaddr, uint16_t regaddr, uint8_t* data, uint16_t length)
{
    devaddr += 1;
    if (i2c_start(devaddr << 1 | 0)) return 1;
    i2c_write(regaddr >> 8);
    i2c_write(regaddr & 0xFF);
    if (i2c_start(devaddr << 1 | 1)) return 1;
    for (uint16_t i = 0; i < (length - 1); i++)
    {
        data[i] = i2c_read_ack();
    }
    data[length - 1] = i2c_read_nack();
    i2c_stop();
    return 0;
}
And now I need to use the received data and send it by UART to the PC:
uint8_t* DevId;
i2c_2byte_readReg(address,REVISION_CODE_DEVID,DevId,2);
deviceH=*DevId++;
deviceL=*DevId;
UART_send(deviceH);
UART_send(deviceL);
I think that I am lost with pointers. Could you help me understand how to get the received data for future use? (UART works fine for me in this case, but it sends only 0x00 with this code.)
The function i2c_2byte_readReg takes as its third argument a pointer to the buffer where the data will be written. Note that the buffer must be at least as large as the fourth argument, length. Your DevId pointer doesn't point to any buffer, so when calling the function you get an access violation.
To get the data you should define an array before calling the function:
const size_t size = 8;
uint8_t data[size];
Then you can call the function passing the address of the buffer as an argument (the name of the array is converted into its address):
const uint16_t length = 2;
i2c_2byte_readReg(address, REVISION_CODE_DEVID, data, length);
Assuming that the function works well, those two bytes will be saved into the data buffer. Remember that size must be greater than or equal to the length argument.
Then you can send the data over UART:
UART_send(data[0]);
UART_send(data[1]);
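Putting the pieces together, a minimal sketch of the corrected call site (names taken from the question, buffer size illustrative):
uint8_t devid[2]; /* caller-owned buffer, at least `length` bytes */
if (i2c_2byte_readReg(address, REVISION_CODE_DEVID, devid, 2) == 0) {
    UART_send(devid[0]); /* high byte of the device ID */
    UART_send(devid[1]); /* low byte */
}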

Exponent Binary Numbers

Could someone tell me the logic behind exponentiating binary numbers? For example, I want to compute 110^10, but I don't know the logic behind it. If someone could supply me with that, it'd be a great help. (And I want it done in pure binary with no conversions and no looping multiplication, just logic...)
peenut is correct in that exponentiation doesn't care what base you're representing your numbers in, and I don't know what you mean by "just logic," but here's a stab at it.
A quick search over at Wikipedia reveals this algorithm. The basic idea is to square your base, store the result, and then square the result and repeat. This will give you the factors of your answer, which you can then multiply together. I think of it as a "binary search"-flavored exponentiation algorithm, since you can skip a lot of intermediate steps by squaring and storing.
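As a sketch of that idea (my own illustration, with a hypothetical function name): since 13 is 1101 in binary, x^13 needs only the stored squares x^8, x^4, and x^1:
double pow13(double x)
{
    double x2 = x * x;   /* x^2 */
    double x4 = x2 * x2; /* x^4 */
    double x8 = x4 * x4; /* x^8 */
    return x8 * x4 * x;  /* 13 = 8 + 4 + 1 */
}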
Binary exponents are very easy. They are simply additions and shifts only.
The number 110 is where you start.
Working backwards through the number 10 - the first bit (i.e. 0) is a zero, so this means "do not add it in."
Now you shift left - so 110 becomes 1100.
Now you work on the next bit of the 10 (i.e. 1) - it's a one, so this means "add this to the result" - the result is 0 so far, because we didn't add anything yet, so the result is now 1100.
There are no more bits to do - so the answer is 1100.
If you were doing 110^110 - you would have one more to do - so - you again shift and get 11000 now.
The last bit is again a one, so now you add:
1100 +
11000 =
100100
110^10=1100 i.e. 6^2=12
110^110=100100 i.e. 6^6=36
Exponentiation is an operation that is independent of the actual textual representation of a number (e.g. base 2 - binary, base 10 - decimal).
Maybe you want to ask about the binary XOR (eXclusive OR) operation instead?
Unfortunately, the easiest way for your computer to handle simple exponents is your "looping multiplication" (the naïve approach), which is the most rudimentary (and literal) way of handling it. As #user1561358 commented, it is NOT just binary adds and shifts. That is multiplication. To raise 6^6 (110^110 in binary) the naïve approach has you multiplying the base n times (as below):
110
x 110
--------------
100100 = 36
x 110
--------------
11011000 = 216
x 110
--------------
10100010000 = 1296
x 110
--------------
1111001100000 = 7776
x 110
--------------
01011011001000000 = 46656
The simple recursive code below (which already halves the exponent at each step) is elegant for most applications:
long long binpow(long long a, long long b) {
    if (b == 0)
        return 1;
    long long res = binpow(a, b / 2);
    if (b % 2)
        return res * res * a;
    else
        return res * res;
}
For larger or arbitrary exponents you can dramatically reduce the number of calculations by applying Horner's Method, explained in great detail in this video specifically about calculating binary exponents.
In essence, you are just multiplying in the bits with non-zero exponents. Let's look at 110^110 in binary (i.e. 6^6):
110^110 breaks down into the following exponents:
The "1" bit is not set, so 6^1 won't be multiplied in, but we do have the two and four bits to calculate:
6^10 (= 6^2) = 36
6^100 (= 6^4) = 1296
So, 6^6 = 36 x 1296 = 46656
The above code can be modified only slightly to check for non-zero exponents with a while loop:
long long binpow(long long a, long long b) {
    long long res = 1;
    while (b > 0) {
        if (b & 1)
            res = res * a;
        a = a * a;
        b >>= 1;
    }
    return res;
}
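For instance, a hypothetical driver for the function above:
#include <stdio.h>

int main(void)
{
    /* 6 = 110 in binary, so the loop multiplies in 6^2 and 6^4
       and skips 6^1 */
    printf("%lld\n", binpow(6, 6)); /* prints 46656 */
    return 0;
}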
To really see the advantage of this, let's try the binary exponentiation of 111^100000000 in binary, which is 7^256.
The naïve approach would require us to make 256 multiplication iterations!
Instead, all the exponent bits except the one for 256 (2^8) are zero, so their multiplications are skipped in the while loop. All that remains is the squaring a * a, which runs once per bit of the exponent (nine times) rather than 256 times:
111^100000000 = (a 719-digit binary number beginning with 11001101011...)
7^256 = 2213595400046048155450188615474945937162517050260073069916366390524704974007989996848003433837940380782794455262312607598867363425940560014856027866381946458951205837379116473663246733509680721264246243189632348313601

ActionScript 3.0 - Null Bytes in ByteArray

I am trying to understand the significance of null bytes in a ByteArray. Do they act like a terminator? I mean, can we not write further into the ByteArray once a null byte has been written?
For instance,
import flash.utils.*;
public class print3r {
    public function print3r() {
        Util.print(nullout());
    }

    public function nullout():ByteArray {
        var bytes:ByteArray = new ByteArray();
        bytes.writeInt(((403705888 + 1) - 1)); // non-printable characters
        bytes.writeInt(((403705872 - 1) + 1)); // non-printable characters
        bytes.writeInt(0x18101000); // notice the null byte in this DWORD
        bytes.writeInt(0x41424344); // ASCII characters ABCD
        return bytes;
    }
}
new print3r;
This gives a blank output.
Now, if I replace the DWORD, 0x18101000 with 0x18101010, this time I can see the ASCII padding, ABCD in the output.
My question is that, is it possible to write past the null byte into the ByteArray()?
The reason I ask is because I have seen in an ActionScript code, that a lot of writeInt and writeByte operations are performed on the ByteArray even after the null byte is written.
Thanks.
is it possible to write past the null byte into the ByteArray()?
Of course it is. A ByteArray is a chunk of raw data. You can write whatever you like there, and you can read it in whatever way you like (using zero bytes as delimiters or whatever else you may want to do).
What you see when you send your bytes to standard output with trace() depends solely on what you actually do with your data to convert it to a string. There are several ways of converting an array of bytes to a string. So, your question is missing an explanation of what the Util.print() method does.
Here are several options for converting bytes to a string:
Loop through the bytes and output characters; the encoding is up to you.
Read a string with ByteArray.readUTFBytes(). This method reads UTF-encoded symbols; it stops when a zero character is encountered.
Read a string with ByteArray.readUTF(). This method expects your string to be prefixed with an unsigned short indicating its length; otherwise it is the same as ByteArray.readUTFBytes().
Use ByteArray.toString(). This is what happens when you simply do trace(byteArray);. This method ignores zero bytes and outputs the rest. It uses the System.useCodePage setting to decide on the encoding, and can use a UTF BOM if the data begins with one.
Here are some tests that illustrate the above:
var test:ByteArray = new ByteArray();
// latin (1 byte per character)
test.writeUTFBytes("ABC");
// zero byte
test.writeByte(0);
// cyrillic (2 bytes per character)
test.writeUTFBytes("\u0410\u0411\u0412");
trace(test); // ABCАБВ
trace(test.toString()); // ABCАБВ
test.position = 0;
trace(test.readUTFBytes(test.length)); // ABC
// simple loop
var output:String = "";
var byte:uint;
for (var i:uint = 0; i < test.length; i += 1) {
    byte = uint(test[i]);
    if (output.length && i % 4 == 0) {
        output += " ";
    }
    output += (byte > 0xF ? "" : "0") + byte.toString(16);
}
trace(output); // 41424300 d090d091 d092
Writing a null to a byte array has no significance as far as I know. The print function might, however, use it as a string terminator.
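For comparison, the same distinction exists in C: a null byte is stored like any other byte, and only string-oriented readers treat it as a terminator. A small sketch:
#include <stdio.h>

int main(void)
{
    char buf[] = { 'A', 'B', 'C', 0, 'D', 'E' };
    printf("%s\n", buf);                /* "%s" stops at the null: ABC */
    fwrite(buf, 1, sizeof buf, stdout); /* writes all 6 bytes, null included */
    putchar('\n');
    return 0;
}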

RTMP_Write function use

I'm trying to use the librtmp library, and it worked pretty well to pull a stream. But now I am trying to publish a stream, and for that I believe I have to use the RTMP_Write function.
What I am trying to accomplish here is a simple C++ program that will read from a file and try to push the stream to a crtmp server. The connection and stream creation are OK, but I'm quite puzzled by the use of RTMP_Write.
Here is what I did:
int Upload(RTMP *rtmp, FILE *file) {
    int nRead = 0;
    unsigned int nWrite = 0;
    int diff = 0;
    int bufferSize = 64 * 1024;
    int byteSum = 0;
    int count = 0;
    char *buffer = (char *) malloc(bufferSize);

    do {
        nRead = fread(buffer + diff, 1, bufferSize - diff, file);
        if (nRead != bufferSize) {
            if (feof(file)) {
                RTMP_LogPrintf("End of file reached!\n");
                break;
            } else if (ferror(file)) {
                RTMP_LogPrintf("Error reading from file stream detected\n");
                break;
            }
        }
        count += 1;
        byteSum += nRead;
        RTMP_LogPrintf("Read %d from file, Sum: %d, Count: %d\n", nRead, byteSum, count);
        nWrite = RTMP_Write(rtmp, buffer, nRead);
        if (nWrite != nRead) {
            diff = nRead - nWrite;
            memcpy(buffer, (const void *)(buffer + bufferSize - diff), diff);
        }
    } while (!RTMP_ctrlC && RTMP_IsConnected(rtmp) && !RTMP_IsTimedout(rtmp));

    free(buffer);
    return RD_SUCCESS;
}
In this Upload function I am receiving the already initialized RTMP structure and a pointer to an open file.
This actually works, and I can see some video being displayed, but it soon gets lost and stops sending packets. I managed to work out that this happens whenever the buffer I set up (which I arbitrarily made 64k, for no special reason) happens to split the flv tag (http://osflash.org/flv#flv_format) of a new packet.
To handle that, I modified the RTMP_Write function and told it to verify whether it will be able to decode a whole flv tag (packet type, body size, timestamp, etc.); if it will not, it should just return the number of useful bytes left in the buffer:
if (s2 - 11 <= 0) {
    rest = size - s2;
    return rest;
}
The code above takes note of this: if the value returned by RTMP_Write is not the number of bytes it was supposed to send, then the caller knows that value is the number of useful bytes left in the buffer. I then copy these bytes to the beginning of the buffer and read more from the file.
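That leftover handling only works if the full size of a tag is known up front. Here is a sketch of that computation, based on the standard FLV layout (11-byte tag header, then the body, then a 4-byte PreviousTagSize trailer); the function name is illustrative:
#include <stddef.h>
#include <stdint.h>

/* Sketch: total bytes occupied by the FLV tag starting at p,
   or 0 if the 11-byte header isn't fully buffered yet. */
size_t flv_tag_size(const uint8_t *p, size_t n)
{
    if (n < 11)
        return 0;
    size_t body = ((size_t)p[1] << 16) /* 24-bit big-endian DataSize */
                | ((size_t)p[2] << 8)
                |  (size_t)p[3];
    return 11 + body + 4; /* header + body + PreviousTagSize */
}
Comparing that value against the bytes actually in the buffer before calling RTMP_Write would avoid handing it a split tag in the first place.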
But I keep getting problems with it, so I was wondering: what is the correct use of this function anyway? Is there a specific buffer size I should be using? (I don't think so.) Or is the function itself buggy?