Try/catch not working during asset loading - exception

During the loading of a sound effect (in C#, XNA) using:
SoundEffect effect = Content.Load<SoundEffect>(location);
on certain computers with a specific sound setup, the application crashes with an InvalidOperationException. The content manager seems properly initialized and the location is correct as well (the same code works fine on other computers).
An example audio file is available at: http://www.hybridbeasts.com/test.wav
Interestingly, wrapping the call in a try/catch fails to help and the application still crashes.
try
{
    effect = Content.Load<SoundEffect>(location);
}
catch
{
    Warning.Happened("Problem with audio playback detected. Sound automatically disabled");
    DebugEngine.disableSound = true;
    return;
}
What is wrong with the try/catch?
The call stack looks like this:
Microsoft.Xna.Framework.dll!Microsoft.Xna.Framework.Helpers.ThrowExceptionFromErrorCode(int error) + 0x3d bytes
Microsoft.Xna.Framework.dll!Microsoft.Xna.Framework.Audio.SoundEffect.AllocateFormatAndData(byte[] format, byte[] data, int offset, int count) + 0x107 bytes
Microsoft.Xna.Framework.dll!Microsoft.Xna.Framework.Audio.SoundEffect.Create(byte[] format, byte[] data, int offset, int count, int loopStart, int loopLength, System.TimeSpan duration) + 0x31 bytes
Microsoft.Xna.Framework.dll!Microsoft.Xna.Framework.Audio.SoundEffect.SoundEffect(byte[] format, byte[] data, int loopStart, int loopLength, System.TimeSpan duration) + 0xd1 bytes
Microsoft.Xna.Framework.dll!Microsoft.Xna.Framework.Content.SoundEffectReader.Read(Microsoft.Xna.Framework.Content.ContentReader input, Microsoft.Xna.Framework.Audio.SoundEffect existingInstance) + 0x124 bytes
Microsoft.Xna.Framework.dll!Microsoft.Xna.Framework.Content.ContentReader.InvokeReader(Microsoft.Xna.Framework.Content.ContentTypeReader reader, object existingInstance) + 0xdf bytes
Microsoft.Xna.Framework.dll!Microsoft.Xna.Framework.Content.ContentReader.ReadObjectInternal(object existingInstance) + 0xfd bytes
Microsoft.Xna.Framework.dll!Microsoft.Xna.Framework.Content.ContentReader.ReadObject() + 0x4d bytes
Microsoft.Xna.Framework.dll!Microsoft.Xna.Framework.Content.ContentReader.ReadAsset() + 0x88 bytes
Microsoft.Xna.Framework.dll!Microsoft.Xna.Framework.Content.ContentManager.ReadAsset(string assetName, System.Action recordDisposableObject) + 0x129 bytes
Microsoft.Xna.Framework.dll!Microsoft.Xna.Framework.Content.ContentManager.Load(string assetName) + 0x2c7 bytes
Project_Ares.exe!Project_Ares.SoundEffectAres.Load(string location) Line 77 + 0x2c bytes C#

Related

Using I2C_master library AVR

I am using the I2C_master library for AVR. Communication works fine, but I have a little problem: how can I get the data?
I am using this function:
uint16_t i2c_2byte_readReg(uint8_t devaddr, uint16_t regaddr, uint8_t* data, uint16_t length){
    devaddr += 1;
    if (i2c_start(devaddr<<1|0)) return 1;
    i2c_write(regaddr >> 8);
    i2c_write(regaddr & 0xFF);
    if (i2c_start(devaddr<<1|1)) return 1;
    for (uint16_t i = 0; i < (length-1); i++)
    {
        data[i] = i2c_read_ack();
    }
    data[(length-1)] = i2c_read_nack();
    i2c_stop();
    return 0;
}
And now I need to use the received data and send it by UART to the PC:
uint8_t* DevId;
i2c_2byte_readReg(address,REVISION_CODE_DEVID,DevId,2);
deviceH=*DevId++;
deviceL=*DevId;
UART_send(deviceH);
UART_send(deviceL);
I think I am lost with pointers. Could you help me: how can I get the received data for future use? (UART works fine for me in this case, but with this code it only sends 0x00.)
The function i2c_2byte_readReg takes as its third argument a pointer to the buffer where the data will be written. Note that the buffer must be at least as large as the fourth argument, length. Your DevId pointer doesn't point to any buffer, so the function writes through an uninitialized pointer and you get undefined behavior.
To get the data you should define an array before calling the function:
const size_t size = 8;
uint8_t data[size];
Then you can call the function passing the address of the buffer as an argument (the name of the array is converted into its address):
const uint16_t length = 2;
i2c_2byte_readReg(address, REVISION_CODE_DEVID, data, length);
Assuming that the function works correctly, those two bytes will be saved into the data buffer. Remember that size must be greater than or equal to the length argument.
Then you can send the data over UART:
UART_send(data[0]);
UART_send(data[1]);
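Putting it together, a minimal sketch might look like this (the i2c_master.h header name and the read_and_send_devid wrapper are my assumptions; address, REVISION_CODE_DEVID and UART_send are taken from your code):
#include <stdint.h>
#include "i2c_master.h" // assumed header name for the I2C_master library

void read_and_send_devid(void) {
    uint8_t data[2]; // buffer with room for the two requested bytes

    // read two bytes starting at REVISION_CODE_DEVID into data[]
    if (i2c_2byte_readReg(address, REVISION_CODE_DEVID, data, 2) == 0) {
        UART_send(data[0]); // first received byte (deviceH)
        UART_send(data[1]); // second received byte (deviceL)
    }
}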

When I send 127+ characters from chrome websocket, my golang server cannot see more than 126

I'm having a blast reinventing the wheel and playing with bits to implement a simple server. It's almost functional, but I'm not sure whether this issue is in my client or my server. Here is the function where I process the byte array returned by net.Conn's Read:
func readWsFrame(p []byte) {
    // process first byte
    b := p[0]
    fmt.Printf("first byte: %b\n", b)
    fin := b & 128 // hopefully 128, for fin
    op := b & 15   // hopefully 1, for text
    fmt.Printf("fin: %d\nop: %d\n", fin, op)
    // process second byte
    b = p[1]
    fmt.Printf("second byte: %b\n", b)
    masked := b & 128 // whether or not the payload is masked
    length := b & 127 // payload length
    fmt.Printf("masked: %d\nlength: %d\n", masked, length)
    // process bytes 3-6 (masking key)
    key := p[2:6]
    // payload
    d := p[6:]
    if length == 126 {
        key = p[4:8]
        d = p[8:]
        fmt.Println("med key")
    } else if length == 127 {
        key = p[10:14]
        d = p[14:]
        fmt.Println("big key")
    } else {
        fmt.Println("lil key")
    }
    fmt.Printf("masking key: %b\n", key)
    fmt.Printf("masked data: %b\n", d)
    var decoded []byte
    for index := 0; index < int(length); index++ {
        decoded = append(decoded, d[index]^key[index%4])
    }
    fmt.Printf("unmasked data: %b\n", decoded)
    payload := string(decoded)
    fmt.Println("payload: ", payload)
}
The client code is me having the dev console open right off this web page and running
var ws = new WebSocket("ws://localhost:16163");
ws.send("a".repeat(125))
ws.send("a".repeat(126))
ws.send("a".repeat(127))
ws.send("a".repeat(400))
My server does what I expect until I reach 127 characters. At that point, and for every amount over 126, my 2nd byte is 11111110 and the length is 126. I can see the unmasked message doesn't go beyond 126 a's.
I'm sure my Go code is sub-par and there might be something obvious here, but I'm looking at the bits themselves, and I can see a 0 where I am expecting a 1. Please help me, thank you!
I saw a similar question about writing messages larger than 126 bytes and how I'll need extra bytes for the payload size, but in this case my client is the Chrome web browser.
--edit:
I realize that I will never see more than 126 characters based on the loop I have there, but I should still see a 1 in the final bit in the second byte for these larger messages, right?
--edit:
Came across this: how to work out payload size from html5 websocket
I guess I misunderstood everything else I was searching for. Can someone confirm this? If the length indicator is < 126, the length is byte & 127. If the indicator is 126, the length is the value of the next 2 bytes. If the indicator is 127, the length is the value of the next 8 bytes.
I initially thought the length would be 127 if the payload length was 127+, hah, oops. So when the indicator is 126 or 127, the 2nd byte is not part of the actual length? I'll probably confirm all of this with testing, but I thank you all for resolving this issue before the weekend so I can finish this side project.
The code should update the length property after realizing it's 127 or 126 by reading the length data in the bytes that follow the initial length indicator.
I would consider those 7 "length" bits slightly differently. They don't really indicate length as much as they indicate encoding.
If the length is encoded using 8 bytes (64 bits), then the length indicator == 127.
If the length is encoded in 2 bytes, then the indicator == 126.
Otherwise, the length is encoded in the 7 bits of the indicator itself.
For example, the length 60 can be encoded in all three ways (albeit, some use more space).
There's a C Websocket implementation here if you want to read an example decoding of the length.
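For instance, a minimal C sketch of that decoding (my own illustration, not code from that implementation) might look like this; it assumes buf points at the second header byte and that the whole header is already buffered:
#include <stdint.h>
#include <stddef.h>

/* Returns the payload length; *header_len is set to the number of bytes
   consumed for the length, counting the indicator byte itself. */
uint64_t ws_payload_length(const uint8_t *buf, size_t *header_len) {
    uint64_t len = buf[0] & 0x7F; /* the 7-bit length indicator */
    *header_len = 1;
    if (len == 126) { /* real length is in the next 2 bytes, big-endian */
        len = ((uint64_t)buf[1] << 8) | buf[2];
        *header_len = 3;
    } else if (len == 127) { /* real length is in the next 8 bytes, big-endian */
        len = 0;
        for (int i = 1; i <= 8; i++)
            len = (len << 8) | buf[i];
        *header_len = 9;
    }
    return len;
}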
Good Luck!

C - pass array as parameter and change size and content

UPDATE: I solved my problem (scroll down).
I'm writing a small C program and I want to do the following:
The program is connected to a mysql database (that works perfectly) and I want to do something with the data from the database. I get about 20-25 rows per query and I created my own struct, which should contain the information from each row of the query.
So my struct looks like this:
typedef struct {
    int timestamp;
    double rate;
    char* market;
    char* currency;
} Rate;
I want to pass an empty array to a function; the function should calculate the size of the array based on the number of rows returned by the query. E.g. if a single SQL query returns 20 rows, the array should contain 20 objects of my Rate struct.
I want something like this:
int main(int argc, char **argv)
{
    Rate *rates = ?; // don't know how to initialize it
    (void) do_something_with_rates(&rates);
    // the size here should be ~20
    printf("size of rates: %d", sizeof(rates)/sizeof(Rate));
}
How does the function do_something_with_rates(Rate **rates) have to look like?
EDIT: I did it as Alex said: I made my function return the size of the array as size_t and passed my array to the function as Rate **rates.
Inside the function you can access and change the values like (*rates)[i].timestamp = 123, for example.
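In outline, the result looks like this (the row count and field values are placeholders for the real query results):
size_t do_something_with_rates(Rate **rates) {
    size_t n = 20; // placeholder: the row count the query actually returned
    *rates = malloc(n * sizeof **rates);
    if (*rates == NULL)
        return 0;
    for (size_t i = 0; i < n; i++) {
        (*rates)[i].timestamp = 123; // placeholder values
        (*rates)[i].rate = 0.0;
    }
    return n;
}

// caller:
Rate *rates = NULL;
size_t size = do_something_with_rates(&rates);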
In C, memory is either dynamically or statically allocated.
Something like int fifty_numbers[50] is statically allocated. The size is 50 integers no matter what, so the compiler knows how big the array is in bytes. sizeof(fifty_numbers) will give you 200 bytes here (assuming 4-byte ints).
Dynamic allocation: int *bunch_of_numbers = malloc(sizeof(int) * varying_size). As you can see, varying_size is not constant, so the compiler can't figure out how big the array is without executing the program. sizeof(bunch_of_numbers) gives you 4 bytes on a 32-bit system, or 8 bytes on a 64-bit system: the size of the pointer, not of the array. The only one who knows how big the array is would be the programmer. In your case, it's whoever wrote do_something_with_rates(), but you're discarding that information by neither returning it nor taking a size parameter.
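To make this concrete, a quick illustration (the printed sizes assume 4-byte ints and 8-byte pointers):
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int fifty_numbers[50];
    int *bunch_of_numbers = malloc(sizeof(int) * 50);
    printf("%zu\n", sizeof fifty_numbers);    // 200: the whole array
    printf("%zu\n", sizeof bunch_of_numbers); // 8: just the pointer
    free(bunch_of_numbers);
    return 0;
}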
It's not clear how do_something_with_rates() was declared exactly, but something like: void do_something_with_rates(Rate **rates) won't work as the function has no idea how big rates is. I recommend something like: void do_something_with_rates(size_t array_size, Rate **rates). At any rate, going by your requirements, it's still a ways away from working. Possible solutions are below:
You need to either return the new array's size:
size_t do_something_with_rates(size_t old_array_size, Rate **rates) {
    Rate *new_rates = malloc(sizeof(Rate) * n); // allocate n Rate objects; n is known here
    // carry out your operation on new_rates,
    // modifying the contents as needed
    free(*rates);       // release the memory taken up by the old array
    *rates = new_rates; // make the caller's pointer point to the new array
    return n;           // return the new size so that the caller knows
}
int main() {
    Rate *rates = malloc(sizeof(Rate) * 20);
    size_t new_size = do_something_with_rates(20, &rates);
    // now new_size holds the size of the new array, which may or may not be 20
    return 0;
}
Or pass in a size parameter for the function to set:
void do_something_with_rates(size_t old_array_size, size_t *new_array_size, Rate **rates) {
    Rate *new_rates = malloc(sizeof(Rate) * n); // allocate n Rate objects; n is known here
    *new_array_size = n; // set the new size so that the caller knows
    // carry out your operation on new_rates,
    // modifying the contents as needed
    free(*rates);       // release the memory taken up by the old array
    *rates = new_rates; // make the caller's pointer point to the new array
}
int main() {
    Rate *rates = malloc(sizeof(Rate) * 20);
    size_t new_size;
    do_something_with_rates(20, &new_size, &rates);
    // now new_size holds the size of the new array, which may or may not be 20
    return 0;
}
Why do I need to pass the old size as a parameter?
void do_something_with_rates(Rate **rates) {
    // You don't know what n is. How would you
    // know how many Rate objects the caller wants
    // you to process for any given call to this?
    for (size_t i = 0; i < n; ++i)
        ; // carry out your operation on (*rates)[i]
}
Everything changes when you have a size parameter:
void do_something_with_rates(size_t size, Rate **rates) {
    for (size_t i = 0; i < size; ++i) // Now you know when to stop
        ; // carry out your operation on (*rates)[i]
}
This is a very fundamental flaw with your program.
I also want the function to change the contents of the array:
size_t do_something_with_rates(size_t old_array_size, Rate **rates) {
    Rate *new_rates = malloc(sizeof(Rate) * n); // allocate n Rate objects
    // carry out some operation on new_rates
    for (size_t i = 0; i < n; ++i) {
        new_rates[i].timestamp = time(NULL);
        // you can see the pattern
    }
    free(*rates);       // release the old array
    *rates = new_rates; // hand the new array back to the caller
    return n;           // return the new size so that the caller knows
}
sizeof produces a value (or code to produce a value) of the size of a type or the type of an expression at compile time. The size of an expression can therefore not change during the execution of the program. If you want that feature, use a variable, a sentinel value, or a different programming language. Your choice. Whatever. C's better than Java.
char foo[42];
foo has either static storage duration (which is only partially related to the static keyword) or automatic storage duration.
Objects with static storage duration exist from the start of the program to its termination. Those global variables are technically called variables declared at file scope that have static storage duration and internal linkage.
Objects with automatic storage duration exist from the beginning of their initialisation to the return of the function. They're usually on the stack, though nothing requires that; they're variables declared at block scope that have automatic storage duration and no linkage.
In either case, today's compilers will encode 42 into the machine code. I suppose it'd be possible to modify the machine code, though the several thousand lines you'd put into that task would be much better invested into storing the size externally (see the other answers), and this isn't really a C question. If you really want to look into this, the only examples I can think of that change their own machine code are viruses... How are you going to avoid that antivirus heuristic?
Another option is to encode size information into a struct, use a flexible array member and then you can carry both the array and the size around as one allocation. Sorry, this is as close as you'll get to what you want. e.g.
struct T_vector {
    size_t size;
    T value[];
};

T *T_make(struct T_vector **v) {
    size_t index = *v ? (*v)->size : 0, size = index + 1;
    if ((index & size) == 0) {
        // grow geometrically (2, 4, 8, ... elements), header included
        void *temp = realloc(*v, sizeof(struct T_vector) + 2 * size * sizeof(T));
        if (!temp) {
            return NULL;
        }
        *v = temp;
    }
    (*v)->size = size;
    return (*v)->value + index;
}

#define T_size(v) ((v) == NULL ? 0 : (v)->size)

int main(void) {
    struct T_vector *v = NULL;           // T_size(v) == 0
    { T *x = T_make(&v); /* fill *x */ } // T_size(v) == 1
    { T *y = T_make(&v); /* fill *y */ } // T_size(v) == 2
    free(v);
}
Disclaimer: I only wrote this as an example; I don't intend to test or maintain it unless the intent of the example suffers drastically. If you want something I've thoroughly tested, use my push_back.
This may seem innocent, yet even with that disclaimer and this upcoming warning I'll likely see a comment along the lines of: Each successive call to T_make may render previously returned pointers invalid... True, and I can't think of much more I could do about that. I would advise calling T_make, modifying the value pointed at by the return value and discarding that pointer, as I've done above (rather explicitly).
Some compilers might even allow you to #define sizeof(x) T_size(x)... I'm joking; don't do this.
Technically we aren't changing the size of an array here; we're allocating ahead of time and where necessary, reallocating and copying to a larger array. It might seem appealing to abstract allocation away this way in C at times... enjoy :)

ActionScript 3.0 - Null Bytes in ByteArray

I am trying to understand the significance of null bytes in a ByteArray. Do they act like a terminator? I mean, can we not write further into the ByteArray once a null byte has been written?
For instance,
import flash.utils.*;

public class print3r {
    public function print3r() {
        Util.print(nullout());
    }
    public function nullout():ByteArray {
        var bytes:ByteArray = new ByteArray();
        bytes.writeInt(((403705888 + 1) - 1)); // non-printable characters
        bytes.writeInt(((403705872 - 1) + 1)); // non-printable characters
        bytes.writeInt(0x18101000); // notice the null byte in this DWORD
        bytes.writeInt(0x41424344); // ASCII characters ABCD
        return bytes;
    }
}

new print3r();
This gives a blank output.
Now, if I replace the DWORD 0x18101000 with 0x18101010, I can see the ASCII padding ABCD in the output.
My question is that, is it possible to write past the null byte into the ByteArray()?
The reason I ask is that I have seen ActionScript code in which a lot of writeInt and writeByte operations are performed on the ByteArray even after a null byte has been written.
Thanks.
is it possible to write past the null byte into the ByteArray()?
Of course it is. A ByteArray is a chunk of raw data. You can write whatever you like there, and you can read it in whatever way you like (using zero bytes as delimiters or whatever else you may want to do).
What you see when you send your bytes to standard output with trace() depends solely on how you convert your data to a string. There are several ways of converting an array of bytes to a string, so your question is missing an explanation of what the Util.print() method does.
Here are several options for converting bytes to a string:
Loop through the bytes and output characters; the encoding is up to you.
Read a string with ByteArray.readUTFBytes(). This method reads UTF-encoded symbols; it stops when a zero character is encountered.
Read a string with ByteArray.readUTF(). This method expects your string to be prefixed with an unsigned short indicating its length; in other respects it behaves the same as ByteArray.readUTFBytes().
Use ByteArray.toString(). This is what happens when you simply do trace(byteArray);. This method ignores zero bytes and outputs the rest. It uses the System.useCodePage setting to decide on the encoding, and can use a UTF BOM if the data begins with it.
Here are some tests that illustrate the above:
var test:ByteArray = new ByteArray();
// latin (1 byte per character)
test.writeUTFBytes("ABC");
// zero byte
test.writeByte(0);
// cyrillic (2 bytes per character)
test.writeUTFBytes("\u0410\u0411\u0412");

trace(test); // ABCАБВ
trace(test.toString()); // ABCАБВ

test.position = 0;
trace(test.readUTFBytes(test.length)); // ABC

// simple loop
var output:String = "";
var byte:uint;
for (var i:uint = 0; i < test.length; i += 1) {
    byte = uint(test[i]);
    if (output.length && i % 4 == 0) {
        output += " ";
    }
    output += (byte > 0xF ? "" : "0") + byte.toString(16);
}
trace(output); // 41424300 d090d091 d092
Writing a null to a byte array has no significance as far as I know. The print function might however use it as a string terminator.

RTMP_Write function use

I'm trying to use the librtmp library, and it worked pretty well for pulling a stream. But now I am trying to publish a stream, and for that I believe I have to use the RTMP_Write function.
What I am trying to accomplish here is a simple C++ program that reads from a file and pushes the stream to a crtmp server. The connection and stream creation work fine, but I'm quite puzzled by the use of RTMP_Write.
Here is what I did:
int Upload(RTMP * rtmp, FILE * file){
    int nRead = 0;
    unsigned int nWrite = 0;
    int diff = 0;
    int bufferSize = 64 * 1024;
    int byteSum = 0;
    int count = 0;
    char * buffer;
    buffer = (char *) malloc(bufferSize);
    do{
        nRead = fread(buffer+diff,1,bufferSize-diff,file);
        if(nRead != bufferSize){
            if(feof(file)){
                RTMP_LogPrintf("End of file reached!\n");
                break;
            }else if(ferror(file)){
                RTMP_LogPrintf("Error reading from file stream detected\n");
                break;
            }
        }
        count += 1;
        byteSum += nRead;
        RTMP_LogPrintf("Read %d from file, Sum: %d, Count: %d\n",nRead,byteSum,count);
        nWrite = RTMP_Write(rtmp,buffer,nRead);
        if(nWrite != nRead){
            diff = nRead - nWrite;
            memcpy(buffer,(const void*)(buffer+bufferSize-diff),diff);
        }
    }while(!RTMP_ctrlC && RTMP_IsConnected(rtmp) && !RTMP_IsTimedout(rtmp));
    free(buffer);
    return RD_SUCCESS;
}
In this Upload function I am receiving the already initialized RTMP structure and a pointer to an open file.
This actually works and I can see some video being displayed, but the stream soon gets lost and stops sending packets. I managed to work out that this happens whenever the buffer I set up (which I arbitrarily made 64 KB, for no special reason) splits the FLV tag (http://osflash.org/flv#flv_format) of a new packet.
To deal with that, I modified the RTMP_Write function to check whether it can decode a whole FLV tag (packet type, body size, timestamp, etc.) and, if it cannot, to just return the number of useful bytes left in the buffer:
if(s2 - 11 <= 0){
    rest = size - s2;
    return rest;
}
The code above takes notice of this: if the value returned by RTMP_Write is not the number of bytes it was asked to send, the caller knows that the return value is the number of useful bytes left in the buffer. I then copy these bytes to the beginning of the buffer and read more from the file.
But I keep getting problems with it, so I was wondering: what is the correct use of this function anyway? Is there a specific buffer size I should be using (I don't think so), or is the function itself buggy?