What does End Of File on a socket mean? - actionscript-3

Using ActionScript 3 in Flex Builder 3.
When handling a SOCKET_DATA event, I occasionally, seemingly at random, get "Error #2030: End of file was encountered." when calling socket.readInt(). I'm confused as to what this error means, since I'm not reading a file. I'm a little unfamiliar with sockets. Thanks.

An end-of-file error typically means the other side of the socket has closed their connection, IIRC.
The reason it's end-of-file is that at a very low level within a program, a file on the disk and a socket are both represented with a number -- a file descriptor -- that the OS translates into the object representing a file or socket or pipe or whatever.
Usually you can avoid this kind of error by checking whether you have already read an EOF. If you did read an EOF and you try reading from the socket/file again, then you will get an EOF error.
Update: According to the ActionScript 3.0 documentation, you do indeed get a close event if the other end closes the socket.

When reading from a socket that is closed, you will get: Error #2002: Operation attempted on invalid socket.
End-of-file errors typically occur on any byte stream if you read more bytes than are available. This is the case for files, sockets, etc. In Flash, it occurs when reading from a Socket or a ByteArray, and maybe in other cases as well.
TCP/IP is packet based but emulates a stream, so you can only read data off the stream that has already been sent to you in TCP packets. Check Socket::bytesAvailable to find out how many bytes are currently available. Always keep in mind that the data you write to the socket in one operation may arrive in multiple packets, each of which will very probably cause the Flash Player to dispatch a socketData event.
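The same framing issue exists with any TCP stream API, not just Flash. As a language-neutral illustration, here is a hedged C++/POSIX sketch of the pattern described above: buffer incoming bytes and only decode a 4-byte integer once all of it has arrived, the analogue of checking bytesAvailable before calling readInt(). The function and buffer names are made up for the example.

#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <cstdint>
#include <cstring>
#include <vector>

// Read one 4-byte big-endian integer from a connected TCP socket `fd`,
// buffering partial data in `buf` between calls: a single recv() may return
// fewer bytes than the other side wrote in one operation.
bool readInt32(int fd, int32_t& out, std::vector<unsigned char>& buf)
{
    unsigned char chunk[512];
    while (buf.size() < 4) {                        // analogous to bytesAvailable < 4
        ssize_t n = recv(fd, chunk, sizeof chunk, 0);
        if (n <= 0)                                 // 0 = peer closed (EOF), -1 = error
            return false;
        buf.insert(buf.end(), chunk, chunk + n);
    }
    uint32_t net;
    std::memcpy(&net, buf.data(), 4);
    buf.erase(buf.begin(), buf.begin() + 4);
    out = static_cast<int32_t>(ntohl(net));         // network byte order, like readInt()
    return true;
}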

Related

Crash reporting and C++ Coroutines?

I use a crash reporting feature that allows the user to submit a crash report if the application crashed with an uncaught exception.
After adopting C++20, coroutines entered the application.
If an unexpected exception is thrown in a coroutine, the exception is caught before it is rethrown.
This causes crash reports to not show the stack trace needed to figure out what happened, but only the stack trace up to the coroutine that rethrew the exception. This basically makes crash reporting useless.
As far as I could find, there is no way to prevent the coroutine from catching exceptions, because that is a required part of the design.
Is there a way to improve this that I can't see?
I am curious because I found nobody else complaining yet. :->
Edit: To clarify, the app is running on Windows. I mean the stack trace of a minidump that is created at the point of the unhandled exception using SetUnhandledExceptionFilter + MiniDumpWriteDump.
C++ does not have standard stack tracing yet, so there is no nice built-in way to do this.
However, there are ways that rely on keeping information in the promise objects.
Clang has documentation for some common debugging methods for coroutines.
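To make the problem concrete, and to show where such promise-side information would live, here is a minimal hedged sketch of the catch-and-rethrow pattern the question describes (the Task type and member names are illustrative, not taken from the question's code):

#include <coroutine>
#include <exception>

struct Task {
    struct promise_type {
        std::exception_ptr error;

        Task get_return_object() { return {}; }
        std::suspend_never initial_suspend() noexcept { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void return_void() {}

        void unhandled_exception() {
            // The compiler-generated catch(...) around the coroutine body lands here;
            // the frames between the original throw site and the coroutine are already
            // unwound. Typical task types stash the exception to rethrow it later:
            error = std::current_exception();
            // ...which is why the later rethrow is all a crash reporter sees. The
            // promise is also the natural place to record extra context (source
            // locations, a parent pointer for an "async stack", etc.).
        }
    };
};

// usage sketch: Task doWork() { mightThrow(); co_return; }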
The best solution we have found is as follows (Windows specific!):
Until now we used SetUnhandledExceptionFilter at the start of the app to set an exception filter function that writes a minidump.
Instead we now use _set_se_translator.
If we want the program to just crash (e.g. if Windows is set to write dumps), we set a function which calls std::abort.
If we want to handle it interactively, we set a function which asks the user whether to send a minidump; the dump is written at this point, as before.
Both cases provide the full callstack in the dump.
The only remaining downside is that we can't let the program crash and dump for "normal" exceptions, which was possible before. But the "most important" exceptions (e.g. access violations) work.
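A minimal hedged sketch of that approach (Windows/MSVC only; requires compiling with /EHa so structured exceptions reach the translator; the dump-writing call itself is elided here):

#include <windows.h>
#include <eh.h>
#include <cstdlib>

// Per the answer above, the translator runs while the faulting call stack is
// still intact, so this is the place to write (or offer to write) a minidump.
void __cdecl seTranslator(unsigned int /*code*/, EXCEPTION_POINTERS* info)
{
    // ... write a minidump from `info` here (MiniDumpWriteDump), or ask the user ...
    (void)info;
    std::abort();   // the "just crash" variant from the answer
}

int main()
{
    _set_se_translator(seTranslator);   // note: per thread, so install it on every thread
    // ... run the application ...
}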

When will SetUnhandledExceptionFilter not work? E.g. stack corruptions?

I would like to have my code create a dump for unhandled exceptions.
I'd thought of using SetUnhandledExceptionFilter. But what are the cases when SetUnhandledExceptionFilter may not work as expected? For example, what about stack corruption issues when, for instance, a buffer overrun occurs on the stack?
What will happen in this case? Are there any additional solutions which will always work?
I've been using SetUnhandledExceptionFilter for quite a while and have not noticed any crashes/problems that are not trapped correctly. And, if an exception is not handled somewhere in the code, it should get handled by the filter. From MSDN regarding the filter...
After calling this function, if an exception occurs in a process that is not being debugged, and the exception makes it to the unhandled exception filter, that filter will call the exception filter function specified by the lpTopLevelExceptionFilter parameter.
There is no mention that the above applies to only certain types of exceptions.
I don't use the filter to create a dump file because the application uses the Microsoft WER system to report crashes. Rather, the filter is used to provide an opportunity to collect two additional files to attach to the crash report (and dump file) that Microsoft will collect.
Here's an example of Microsoft's crash report dashboard for the application with module names redacted.
You'll see that there's a wide range of crash types collected, including stack buffer overrun.
Also make sure no other code calls SetUnhandledExceptionFilter() after you set it to your handler.
I had a similar issue, and in my case it was caused by another linked library (ImageMagick), which called SetUnhandledExceptionFilter() from its Magick::InitializeMagick(), which was called only in some situations in our application. That replaced our handler with ImageMagick's handler.
I found it by setting a breakpoint on SetUnhandledExceptionFilter() in gdb and checking the backtrace.
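For reference, a minimal hedged sketch of the common pattern the question asks about: install a top-level filter that writes a minidump and then lets the process die. The dump path and dump type are illustrative, and you must link against dbghelp.lib. Note that the first answer above deliberately does not do this, letting WER collect the dump instead.

#include <windows.h>
#include <dbghelp.h>
#pragma comment(lib, "dbghelp.lib")

LONG WINAPI writeDumpFilter(EXCEPTION_POINTERS* info)
{
    // Write crash.dmp into the current directory with the faulting context.
    HANDLE file = CreateFileW(L"crash.dmp", GENERIC_WRITE, 0, nullptr,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (file != INVALID_HANDLE_VALUE) {
        MINIDUMP_EXCEPTION_INFORMATION mei = { GetCurrentThreadId(), info, FALSE };
        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                          MiniDumpNormal, &mei, nullptr, nullptr);
        CloseHandle(file);
    }
    return EXCEPTION_EXECUTE_HANDLER;   // terminate the process without the WER dialog
}

int main()
{
    SetUnhandledExceptionFilter(writeDumpFilter);
    // ... run the application; beware of libraries replacing the filter later ...
}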

Open a File in D

If I want to safely try to open a file in D, is the preferred way to either
1. try to open it and catch the exception (and optionally figure out why) if it fails, or
2. check if it exists and is readable, and only then open it?
I'm guessing the second alternative results in more I/O and is more complex, right?
If the file is expected to be there according to normal program operation and the given user input, then use 1 - just try to open the file and rely on exception handling to handle the exceptional situation that the file is not there.
For example:
import std.file : exists;
import std.path : expandTilde;
import std.stdio : File;

/// If the user has a local configuration file in his home directory, open that.
/// Otherwise, open the global configuration file that is a part of the program,
/// and should be installed on all systems where the program is running.
File configFile;
if ("~/.transmogrifier.conf".expandTilde.exists)
    configFile.open("~/.transmogrifier.conf".expandTilde);
else
    configFile.open("/etc/transmogrifier.conf");
Note that using 2 might lead to security issues in your program. For example, if the file is present at the moment when your program checks if the file exists, but is gone when it tries to open it, your program may behave in an unexpected way. If you use 2, make sure that your program still behaves in a desirable way if opening the file fails even though your program just checked that the file exists and is readable.
Generally, it's better to check whether the file exists first, because it's often very likely that the file doesn't exist, and simply letting it fail when you try and open it is a case of using exceptions for flow control. It's also inefficient in the case where the file doesn't exist, because exceptions are quite expensive in D (though the cost of the I/O may still outweigh the cost of the exception given how expensive I/O is).
It's generally considered bad practice to use exceptions in cases where the exception is likely to be thrown. In those cases, it's far better to return whether the operation succeeded or to check whether the operation is likely to succeed prior to attempting the operation. In the case of opening files, you'd likely do the latter. So, the cleanest way to do what you're trying to do would be to do something like
if(filename.exists)
{
    auto file = File(filename);
    ...
}
or if you want to read the whole file in as a string in one go, you'd do
if(filename.exists)
{
    auto fileContents = readText(filename);
    ...
}
exists and readText are in std.file, and File is in std.stdio.
If you're dealing with a case where it's highly likely that the file will exist and that therefore it's very unlikely that an exception will be thrown, then skipping the check and just trying to open the file is fine. But what you want to avoid is relying on the exception being thrown when it's not unlikely that the operation will fail. You want exceptions to be thrown rarely, so you check that operations will succeed before attempting them if it's likely that they will fail and throw an exception. Otherwise, you end up using exceptions for flow control and harm the efficiency (and maintainability) of your program.
And it's often the case that a file won't be there when you try and open it, so it's usually the case that you should check that a file exists before trying to open it (but it does ultimately depend on your particular use case).
I'd say you need to be prepared for an exception to be thrown anyway, otherwise you have a race condition (another process may delete the file between the test and the open etc). So it's best just to go ahead and open, then deal with the contingency.

Disabling Nagle's Algorithm under ActionScript 3

I've been working with AS3 sockets, and I noticed that small packets are 'Nagled' when sent. I tried to find a way to set NoDelay for the socket, but I didn't find a clue even in the documentation. Is there another way to turn Nagle's algorithm off in AS3 TCP sockets?
You can tell Flash to send out the data through the socket using the flush() method on the Socket object.
Flushes any accumulated data in the socket's output buffer.
That said, Flash does what it thinks is best, and may not want to send your data too often. Still, the delay shouldn't be more than a few milliseconds.
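For comparison, on platforms whose socket API exposes the option directly, disabling Nagle is a single setsockopt() call; AS3's Socket simply doesn't surface it, which is why flush() is the closest available control. A hedged C++/POSIX sketch, assuming fd is an already-connected TCP socket:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Turn Nagle's algorithm off for this socket: small writes go out immediately.
int setNoDelay(int fd)
{
    int one = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
}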

Why is writing a closed TCP socket worse than reading one?

When you read from a closed TCP socket you get a regular error: the read either returns 0, indicating EOF, or -1 with an error code in errno which can be printed with perror.
However, when you write to a closed TCP socket, the OS sends SIGPIPE to your app, which will terminate the app if not caught.
Why is writing to a closed TCP socket worse than reading from it?
+1 To Greg Hewgill for leading my thought process in the correct direction to find the answer.
The real reason for SIGPIPE in both sockets and pipes is the filter idiom / pattern which applies to typical I/O in Unix systems.
Starting with pipes. Filter programs like grep typically write to STDOUT and read from STDIN, which may be redirected by the shell to a pipe. For example:
cat someVeryBigFile | grep foo | doSomeThingErrorProne
When the shell forks and then execs these programs, it probably uses the dup2 system call to redirect STDIN, STDOUT and STDERR to the appropriate pipes.
Since the filter program grep doesn't know, and has no way of knowing, that its output has been redirected, the only way to tell it to stop writing to a broken pipe if doSomeThingErrorProne crashes is with a signal, since the return values of writes to STDOUT are rarely if ever checked.
The analog with sockets would be the inetd server taking the place of the shell.
As an example, assume you could turn grep into a network service which operates over TCP sockets. For example, with inetd, if you want to have a grep server on TCP port 8000, then add this to /etc/services:
grep 8000/tcp # grep server
Then add this to /etc/inetd.conf:
grep stream tcp nowait root /usr/bin/grep grep foo
Send SIGHUP to inetd and connect to port 8000 with telnet. This should cause inetd to fork, dup the socket onto STDIN, STDOUT and STDERR, and then exec grep with foo as an argument. If you start typing lines into telnet, grep will echo back those lines which contain foo.
Now replace telnet with a program named ticker that, for instance, writes a stream of real-time stock quotes to STDOUT and accepts commands on STDIN. Someone telnets to port 8000 and types "start java" to get quotes for Sun Microsystems. Then they get up and go to lunch. telnet inexplicably crashes. If there were no SIGPIPE to send, ticker would keep sending quotes forever, never knowing that the process on the other end had crashed, and needlessly wasting system resources.
Usually if you're writing to a socket, you would expect the other end to be listening. This is sort of like a telephone call - if you're speaking, you wouldn't expect the other party to simply hang up the call.
If you're reading from a socket, then you're expecting the other end to either (a) send you something, or (b) close the socket. Situation (b) would happen if you've just sent something like a QUIT command to the other end.
Think of the socket as a big pipeline of data between the sending and the receiving process. Now imagine that the pipeline has a valve that is shut (the socket connection is closed).
If you're reading from the socket (trying to get something out of the pipe), there's no harm in trying to read something that isn't there; you just won't get any data out. In fact, you may, as you said, get an EOF, which is correct, as there's no more data to be read.
However, writing to this closed connection is another matter. Data won't go through, and you may wind up dropping some important communication on the floor. (You can't send water down a pipe with a closed valve; if you try, something will probably burst somewhere, or, at the very least, the back pressure will spray water all over the place.) That's why there's a more powerful tool to alert you to this condition, namely, the SIGPIPE signal.
You can always ignore or block the signal, but you do so at your own risk.
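A hedged C-style sketch of the "ignore it and check errno" option mentioned above: once SIGPIPE is ignored, writing to a connection the peer has closed fails with EPIPE instead of killing the process. The helper names are made up for the example.

#include <cerrno>
#include <csignal>
#include <cstdio>
#include <unistd.h>

// After this call, write() to a broken pipe/socket returns -1 with errno == EPIPE
// instead of delivering SIGPIPE and terminating the process.
void ignoreSigpipe()
{
    std::signal(SIGPIPE, SIG_IGN);
}

ssize_t checkedWrite(int fd, const void* buf, size_t len)
{
    ssize_t n = write(fd, buf, len);
    if (n < 0 && errno == EPIPE)
        std::fprintf(stderr, "peer closed the connection\n");
    return n;
}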
I think a large part of the answer is 'so that a socket behaves rather similarly to a classic Unix (anonymous) pipe'. Those also exhibit the same behaviour - witness the name of the signal.
So then it is reasonable to ask why pipes behave that way. Greg Hewgill's answer gives a summary of the situation.
Another way of looking at it is: what is the alternative? Should a 'read()' on a pipe with no writer give a SIGPIPE signal? The meaning of SIGPIPE would have to change from 'write on a pipe with no one to read it', of course, but that's trivial. There's no particular reason to think that it would be better; the EOF indication (zero bytes to read; zero bytes read) is a perfect description of the state of the pipe, and so the behaviour of read is good.
What about 'write()'? Well, one option would be to return the number of bytes written -- zero. But that is not a good idea; it implies that the code should try again and maybe more bytes would be sent, which is not going to be the case. Another option would be an error: write() returns -1 and sets an appropriate errno. It isn't clear that there is one. EINVAL or EBADF are both inaccurate: the file descriptor is correct and open at this end (and should be closed after the failing write); there just isn't anything at the other end to read the data. EPIPE means 'broken PIPE'; so, with a caveat about "this is a socket, not a pipe", it would be the appropriate error. It is probably the errno returned if you ignore SIGPIPE.

It would be feasible to do this -- just return an appropriate error when the pipe is broken (and never send the signal). However, it is an empirical fact that many programs do not pay much attention to where their output is going, and if you pipe a command that will read a multi-gigabyte file into a process that quits after the first 20 KB but never checks the status of its writes, it will take a long time to finish and will waste machine effort while doing so, whereas by sending it a signal that it is not ignoring, it stops quickly -- which is definitely advantageous. And you can still get the error if you want it. So the signal sending has benefits for the OS in the context of pipes, and sockets emulate pipes rather closely.
Interesting aside: while checking the message for SIGPIPE, I found the socket option:
#define SO_NOSIGPIPE 0x1022 /* APPLE: No SIGPIPE on EPIPE */
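If you only want to suppress the signal for one socket rather than process-wide, there are per-socket alternatives; a hedged sketch (SO_NOSIGPIPE is BSD/macOS specific, and on Linux the per-call MSG_NOSIGNAL flag to send() plays the same role):

#include <sys/socket.h>

// BSD/macOS: writes on this socket fail with EPIPE instead of raising SIGPIPE.
void disableSigpipe(int fd)
{
#ifdef SO_NOSIGPIPE
    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_NOSIGPIPE, &one, sizeof one);
#endif
}

// Linux equivalent, per send() call:
//   ssize_t n = send(fd, buf, len, MSG_NOSIGNAL);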