Must a restart handler be signal-ed from within another handler, or can it be signal-ed directly by the code where the exceptional condition was detected?
If it must be signal-ed from within a handler, why is that so? It seems like a needless extra step.
What is the added value of a restart handler as opposed to a regular handler? If we dispensed with restart handlers altogether (but kept regular handlers), would it make any difference to the power or expressiveness of the language?
The following answer should be taken with a grain of salt. It is based on my understanding of the section "Conditions" in "The Dylan Reference Manual", however I have never written a single line of Dylan code and have not even read much more of the reference manual than said section.
Must a restart handler be signal-ed from within another handler, or can it be signal-ed directly by the code where the exceptional condition was detected?
A restart is a condition, as shown in Figure 11-6 of the reference manual. It can be signal-ed whenever a signal statement is syntactically valid. There is no special mechanism for installing handlers for restart conditions as opposed to handlers for non-restart conditions (in contrast to such languages as Common Lisp and R).
The only difference in signaling a restart as opposed to signaling a non-restart condition is that if a restart handler is signal-ed from within another handler, the rest of the handler code that lexically follows the signal-ing will not be executed even if the restart handler returns. In this case, the execution of the handler that signal-ed the restart, as well as the execution of the handlers that invoked this handler, stops and the values returned by the restart handler become the values returned by each of these handlers. ("If the restart handler returns some values, signal returns those values and the handler that called signal also returns them. The call to signal from the signaling unit that signaled the original condition returns the same values, and the signaling unit recovers as directed by those values." Restarts/The Dylan Reference Manual).
It is not clear to me what happens if a restart handler performs a non-local exit targeting a location inside the unit where the restart handler was signal-ed.
What is the added value of a restart handler as opposed to a regular handler? If we dispensed with restart handlers altogether (but kept regular handlers), would it make any difference to the power or expressiveness of the language?
The restart mechanism is, in effect, a switch statement whose selection condition is determined dynamically by code external to the function where the switch statement is defined. A restart signal can be emulated by a non-restart condition, but the restart mechanism provides two formal facilities that otherwise would have to be established by convention in order to achieve similar functionality:
When a restart handler returns after having been signal-ed from within another handler, the rest of the code in the other handler is automatically skipped and the handler returns the values returned by the restart handler.
A restart condition can be formally identified by its type. If there weren't a restart type, some other convention would need to be followed if it were desirable for restart conditions to be identifiable, e.g. for the purposes of listing them in the debugger's Recovery menu ("An interactive debugger ought to offer the user the ability to signal any restart for which a restart handler is applicable and to return if the condition's recovery protocol allows it. This could, for example, be done with a menu titled "Recovery." " Recovery Protocols/The Dylan Reference Manual)
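The emulation mentioned above can be sketched as follows. This is a hypothetical Python model, not Dylan: all names (`Condition`, `Restart`, `with_handler`, `signal`) are invented for illustration, and the semantics of handler unwinding described in the reference manual are simplified. The point it shows is that a restart is just a condition distinguished by its type, signaled through the same dynamic handler stack as any other condition, and that the signaling unit can signal a restart directly without going through another handler.

```python
# A minimal sketch (hypothetical names, not Dylan) of signaling restarts
# as ordinary conditions distinguished only by their type.

class Condition:            # root of the condition hierarchy
    pass

class Restart(Condition):   # restarts are conditions of a special type
    pass

class RetryWithValue(Restart):
    def __init__(self, default):
        self.default = default

_handlers = []              # dynamic stack of (condition_type, handler_fn)

def with_handler(ctype, fn, body):
    """Install a handler for the dynamic extent of body()."""
    _handlers.append((ctype, fn))
    try:
        return body()
    finally:
        _handlers.pop()

def signal(condition):
    """Call the innermost applicable handler; its return value
    becomes the value of the signal expression."""
    for ctype, fn in reversed(_handlers):
        if isinstance(condition, ctype):
            return fn(condition)
    raise RuntimeError("no applicable handler")

# The signaling unit signals the restart directly -- no outer
# non-restart handler is involved.
def parse(text):
    try:
        return int(text)
    except ValueError:
        return signal(RetryWithValue(default=0))

result = with_handler(RetryWithValue,
                      lambda c: c.default,
                      lambda: parse("not a number"))
print(result)  # -> 0
```

Note that in this model the only thing making `RetryWithValue` a "restart" is its place in the type hierarchy, which is exactly the second facility listed above.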
I'm trying to understand how partitions process events when a retry policy is in place for the Event Hub, and I can't find an answer to what happens to new events in a partition while an earlier event in that same partition has failed and is being retried.
My guess is that the failed event shouldn't block new ones from executing, and that when it retries it is put at the end of the partition, so any events that arrived in the partition after the failed event should be processed in order without any blockage.
Can someone explain what is actually happening in a scenario like that?
Thanks.
It's difficult to answer precisely without some understanding of the application context. The below assumes the current generation of the Azure SDK for .NET, though conceptually the answer will be similar for others.
Retries during publishing are performed within the client, which treats each publishing operation as independent and isolated. When your application calls SendAsync, the client will attempt to publish those events and will apply its retry policy within the scope of that call. When the SendAsync call completes, you'll have a deterministic answer as to whether the call succeeded or failed.
If the SendAsync call throws, the retry policy has already been applied and either the exception was fatal or all retries were exhausted. The operation is complete and the client is no longer trying to publish those events.
If your application makes a single SendAsync call then, in the majority of cases, it will understand the outcome of the publishing operation and the order of events is preserved. If your application is calling SendAsync concurrently, then it is possible that events will arrive out of order - either due to network latency or retries.
While the majority of the time, the outcome of a call is fully deterministic, some corner cases do exist. For example, if the SendAsync call encounters a timeout, it is ambiguous whether or not the service received the events. The client will retry, which may produce duplicates. If your application sees a TimeoutException surface, then it cannot be sure whether or not the events were successfully published.
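The behavior described above can be sketched as a simplified model. This is illustrative only, not the Azure SDK's actual implementation; `send_with_retry`, `SendTimeout`, and the fake transport are all invented. It shows the two key points: the retry policy runs entirely inside the call, and a response timeout is ambiguous, so retrying can produce duplicates on the service side.

```python
# Illustrative sketch of per-call retry semantics (not the actual
# Azure SDK implementation).  The retry policy runs inside the call;
# the caller gets a deterministic success, or an exception once the
# policy is exhausted.

import time

class SendTimeout(Exception):
    """All retries exhausted; the service may or may not have
    received the events (the ambiguous corner case)."""

def send_with_retry(transport_send, events, max_retries=3, backoff=0.0):
    last_error = None
    for _attempt in range(max_retries):
        try:
            transport_send(events)   # may succeed server-side, yet
            return "accepted"        # time out waiting for the ack...
        except TimeoutError as err:
            last_error = err         # ...so retrying can duplicate
            time.sleep(backoff)
    raise SendTimeout("outcome unknown") from last_error

# A fake transport whose first two acks time out: the caller sees one
# successful call, but the service ends up holding three copies.
received, failures = [], [TimeoutError(), TimeoutError()]

def flaky_send(events):
    received.extend(events)          # the service got them each time
    if failures:
        raise failures.pop(0)        # but the response timed out

outcome = send_with_retry(flaky_send, ["event-1"])
print(outcome, len(received))  # -> accepted 3  (duplicates!)
```

This is why the answer above notes that a surfaced TimeoutException leaves the application unable to know whether the events were published.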
Here are a few questions in regard with event handling:
Question 1
In principle, may event handler methods (UI or not) execute for a relatively long time?
Question 2
If event handling may take a long time in a given system, then this handling should probably be performed asynchronously, in order to avoid blocking the software. In that case, should the class publishing the event call all the registered handlers asynchronously? Or is it better for that class to avoid any such assumptions, and instead have each handler that takes a long time to execute perform its heavy work asynchronously and return immediately without blocking?
Question 3
Anyway, when an event handler method is asynchronously called using BeginInvoke by the class publishing the event, is it mandatory to call the corresponding EndInvoke, and even to take into account the possibility of an exception? Or is it better for the class raising the event to ignore them?
Answer to question 1
See jgauffin.
Basically, event handlers must complete in a relatively short time so as not to block the execution of the rest of the registered handlers, nor the handling of subsequent events. Needless to say, this demand becomes crucial when dealing with events whose handling would otherwise leave the UI unresponsive!
Answer to question 2
Assuming that a class publishing an event isn't familiar with all possible event handlers (consider, especially, an event published by a class exported from a DLL), the class would have to call all the registered handlers asynchronously. But this has its costs (thread switching, caching, concurrency, synchronization, etc.: see, for example, Brannon, Manoj Sharma, or even this wiki), which shouldn't be paid unless they must be. Therefore the best alternative is letting each registered handler, which of course knows how time-consuming its own work is, decide whether to execute that work synchronously or asynchronously. And here's a nice idea: an event may use its event-arguments structure to publish the time allocated to each handler (basically the overall permitted event-handling time divided by the number of currently registered listeners), letting each handler use this information to decide whether to run its own work synchronously or asynchronously.
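The time-budget idea at the end of this answer can be sketched as follows, in Python rather than C#, with all names (`Publisher`, `EventArgs`, `budget`) invented for illustration: the publisher divides its overall handling budget among the registered listeners and passes the per-handler share in the event arguments, and each handler compares its own expected cost against that share.

```python
# Sketch of the time-budget idea (invented names, Python rather than
# .NET): the publisher divides its overall handling budget among the
# registered listeners; each handler decides sync vs. async.

import threading

class EventArgs:
    def __init__(self, budget_per_handler):
        self.budget = budget_per_handler  # seconds allotted per handler

class Publisher:
    def __init__(self, total_budget=1.0):
        self.listeners = []
        self.total_budget = total_budget

    def raise_event(self):
        args = EventArgs(self.total_budget / max(len(self.listeners), 1))
        for listener in self.listeners:
            listener(args)           # every handler is invoked inline

def make_handler(name, expected_cost, log):
    def handler(args):
        if expected_cost <= args.budget:
            log.append((name, "sync"))    # cheap: do the work inline
        else:
            log.append((name, "async"))   # expensive: offload it
            threading.Thread(target=lambda: None).start()
    return handler

log = []
pub = Publisher(total_budget=1.0)
pub.listeners.append(make_handler("fast", 0.1, log))
pub.listeners.append(make_handler("slow", 5.0, log))
pub.raise_event()
print(log)  # -> [('fast', 'sync'), ('slow', 'async')]
```

The design point is that the publisher makes no assumption about handler cost; each handler makes the sync/async decision with information only it has.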
Answer to question 3
See MSDN (note "Important!"), Hans Passant (a prominent world-class .NET expert), Bruno Brant, and eventually STW, who also supplies demo code; all of them seem to be strongly in favor of calling EndInvoke and catching possible exceptions. (Indeed, some programmers tend to avoid using exceptions, but exceptions are inherent in C++, Java, and .NET methods, and even more so in Python functions. If big-data software like Hadoop and the applications above it use exceptions intensively, then anyone may.)
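The same principle can be shown in Python with concurrent.futures standing in for .NET's BeginInvoke/EndInvoke pair (this is an analogy, not the .NET API): if you never collect the result, the handler's exception is silently lost, whereas collecting it surfaces the exception so it can be handled.

```python
# The EndInvoke rule, transposed to Python's concurrent.futures:
# submit() plays the role of BeginInvoke, and result() plays the
# role of EndInvoke -- skip it and the handler's exception vanishes.

from concurrent.futures import ThreadPoolExecutor

def handler(event):
    raise ValueError(f"handler failed on {event}")

errors = []
with ThreadPoolExecutor() as pool:
    future = pool.submit(handler, "click")   # analogue of BeginInvoke
    try:
        future.result()                      # analogue of EndInvoke
    except ValueError as err:
        errors.append(str(err))              # the exception surfaces here

print(errors)  # -> ['handler failed on click']
```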
Postscript
Eventually, after realizing that no one was going to reply to my question, I turned to Microsoft's social MSDN, where I immediately received an answer from Magnus (also a prominent world-class .NET expert), which basically agreed with the arguments above; see our correspondence here.
I would like to have my code create a dump for unhandled exceptions.
I'd thought of using SetUnhandledExceptionFilter. But what are the cases in which SetUnhandledExceptionFilter may not work as expected? For example, what about stack corruption issues when, for instance, a buffer overrun occurs on the stack?
What will happen in this case? Are there any additional solutions that will always work?
I've been using SetUnhandledExceptionFilter for quite a while and have not noticed any crashes/problems that are not trapped correctly. And, if an exception is not handled somewhere in the code, it should get handled by the filter. From MSDN regarding the filter...
After calling this function, if an exception occurs in a process that
is not being debugged, and the exception makes it to the unhandled
exception filter, that filter will call the exception filter function
specified by the lpTopLevelExceptionFilter parameter.
There is no mention that the above applies to only certain types of exceptions.
I don't use the filter to create a dump file because the application uses the Microsoft WER system to report crashes. Rather, the filter is used to provide an opportunity to collect two additional files to attach to the crash report (and dump file) that Microsoft will collect.
Here's an example of Microsoft's crash report dashboard for the application with module names redacted.
You'll see that there's a wide range of crash types collected, including stack buffer overrun.
Also make sure no other code calls the SetUnhandledExceptionFilter() after you set it to your handler.
I had a similar issue, and in my case it was caused by another linked library (ImageMagick), which called SetUnhandledExceptionFilter() from its Magick::InitializeMagick(), which was called only in some situations in our application. It then replaced our handler with ImageMagick's handler.
I found it by setting a breakpoint on SetUnhandledExceptionFilter() in gdb and checked the backtrace.
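The same failure mode exists in other runtimes, which makes it easy to demonstrate without Windows. Here is a Python sketch of the analogous problem, with `sys.excepthook` standing in for the process-wide unhandled-exception filter and `library_init` as an invented stand-in for a library like ImageMagick: the last installer wins, and saving and chaining to the previous hook is the defensive fix.

```python
# Sketch of the same pitfall in Python: sys.excepthook plays the role
# of the process-wide unhandled-exception filter, and any library can
# silently replace the one you installed.  Saving and chaining to the
# previous hook (as this polite library does) is the defensive fix.

import sys

calls = []

def my_hook(exc_type, exc, tb):
    calls.append("mine")

sys.excepthook = my_hook                 # our "SetUnhandledExceptionFilter"

def library_init():                      # a library doing the same thing...
    previous = sys.excepthook
    def library_hook(exc_type, exc, tb):
        calls.append("library")
        previous(exc_type, exc, tb)      # ...but chaining to the old hook
    sys.excepthook = library_hook

library_init()
sys.excepthook(ValueError, ValueError("boom"), None)  # simulate a crash
print(calls)  # -> ['library', 'mine']
```

A library that does not chain (as in the ImageMagick case above) would leave `calls` as just `['library']`, which is exactly the symptom the breakpoint on SetUnhandledExceptionFilter() revealed.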
Background: I have a legacy project which I've been tasked with debugging and revising. The budget is very low and doesn't allow for significant refactoring. I am slowly working my way into the code and coming across many possible issues and/or inefficiencies.
Question: What are the consequences of adding the same eventListener multiple times? Does it overwrite the existing eventListener? Is this just a matter of inefficiency?
There is a routine which is called very frequently which adds eventListeners. I put in a trace statement to confirm the eventListener redundancy.
trace("*** already has eventListener", tempEventButton.hasEventListener("eventClicked"));
According to the official Actionscript 3.0 reference on the Event Dispatcher class:
Keep in mind that after the listener is registered, subsequent calls
to addEventListener() with a different type or useCapture value result
in the creation of a separate listener registration.
This implies that, providing you do not register the same listener (the function passed to addEventListener) with a different type or useCapture, a new eventListener wouldn't be created.
The question is whether this is a more expensive check for the EventDispatcher to perform than checking yourself that addEventListener() for this event is only called once. Rather than calling hasEventListener() to perform this check, you could record in a Boolean that the listener has already been added, which wouldn't be a very expensive check to perform.
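The two checks being compared can be sketched as follows (a hypothetical dispatcher in Python, not the Flash runtime): the dispatcher's own duplicate scan over its listener list on every registration, versus the caller keeping a single Boolean flag.

```python
# Sketch (not the Flash runtime) of the two duplicate checks being
# compared: the dispatcher scanning its listener list on every
# registration, vs. the caller keeping a cheap Boolean flag.

class EventDispatcher:
    def __init__(self):
        self._listeners = []    # (type, listener, use_capture) triples

    def add_event_listener(self, etype, listener, use_capture=False):
        key = (etype, listener, use_capture)
        if key in self._listeners:   # linear scan on every call
            return                   # identical registration: ignored
        self._listeners.append(key)

    def listener_count(self, etype):
        return sum(1 for t, _, _ in self._listeners if t == etype)

def on_clicked(event):
    pass

button = EventDispatcher()
for _ in range(1000):                # the frequently-called routine
    button.add_event_listener("eventClicked", on_clicked)
print(button.listener_count("eventClicked"))  # -> 1

# The caller-side alternative: one Boolean, no scan at all.
listener_added = False
if not listener_added:
    button.add_event_listener("eventClicked", on_clicked)
    listener_added = True
```

Either way only one listener ends up registered; the flag merely avoids paying the dispatcher's scan a thousand times.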
Which of the following ways of handling this precondition is more desirable and what are the greater implications?
1:
If Not Exists(File) Then
    ThrowException
    Exit
End If
File.Open
...work on file...
2:
If Exists(File) Then
    File.Open
    ...work on file...
Else
    ThrowException
    Exit
End If
Note: The File existence check is just an example of a precondition to HANDLE. Clearly, there is a good case for letting File existence checks throw their own exceptions upwards.
I prefer the first variant because it better documents that there are preconditions.
Separating the pre-condition check from the work is only valid if nothing can change between the two. In this case an external event could delete the file before you open it. Hence the check for file existence has little value; the open call has to check this anyway, so let it produce the exception.
It's a style thing. Both work well; however, I prefer option 1. I like to exit my method as soon as I can and have all the checks up front.
The readability of the first approach is higher than that of the second.
The second option can nest quite quickly if you have several preconditions to check; moreover, it suggests that the if/else is somehow part of the normal flow, while it is really there only for exceptional situations.
Likewise, the expressiveness of the first approach is higher than that of the second.
As we are talking about preconditions, they should be checked at the beginning of the procedure, just to ensure the contract is being respected; for this reason, the entire check should be somehow separated from the remaining part of the procedure.
For these two reasons, I would definitely go for the first option.
Note: I am talking here about preconditions: I expect that the contract of your function explicitly defines the file as existing, and therefore not having it would be a sign of programming error.
Otherwise, if we are simply talking about exception handling, I would simply leave it to File.Open, handling that exception only if you have some idea of how to proceed with it.
Every exception must be produced at the appropriate level. In this case, your exception is an open() issue, which is handled by the open() call. Therefore, you should not add exception code to your routine, because you would duplicate stuff. This holds unless:
you want to abstract your IO backend (say your high-level routine can use file open now, but also MySQL in the future). In this case, it would be better for client code to know that a single, more standard exception will be produced if IO issues arise
the presence of a low level exception implies a higher level exception with high level semantic (for example, not being able to open a password file means that no auth is possible and therefore you should raise something like UnableToAuthenticateException)
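The second bullet can be sketched in Python (`UnableToAuthenticateException` is the invented name from the answer, and `load_credentials` is hypothetical): the low-level IO failure is caught and re-raised as an exception with high-level semantics, chaining the original cause.

```python
# Sketch of the second bullet: translating a low-level IO failure
# into an exception with high-level meaning for the caller.

class UnableToAuthenticateException(Exception):
    pass

def load_credentials(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as err:
        # The low-level failure *means* authentication is impossible,
        # so raise that, chaining the original cause for debugging.
        raise UnableToAuthenticateException(
            "cannot read password file") from err

try:
    load_credentials("/nonexistent/passwords")
except UnableToAuthenticateException as err:
    caught = type(err.__cause__).__name__
print(caught)  # -> FileNotFoundError
```

The caller handles one exception with the right semantics, while the chained cause preserves the low-level detail.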
As for coding style of your two cases, I would definitely go for the first. I hate long blocks of code, in particular under ifs. They also tend to nest and if you choose the second strategy, you will end up indenting too much.
A true precondition is something whose violation indicates a bug in the caller: you design a function under certain conditions, but they do not hold, so the caller should never have called the function with these data.
Your case of not finding a file could be like this, if the file is required and its existence is checked earlier in another part of the code; however, this is not quite so, as djna says: file deletion or a network failure could cause an error right when you open the file.
The most common treatment is then to try to open the file, and throw an exception on failure. Then, assuming that an exception hasn't been thrown, continue with normal work.
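That treatment, sketched in Python (`process` is an invented name): no existence pre-check, which would race with deletion anyway; just attempt the open and handle the failure.

```python
# Sketch of the recommended treatment: skip the racy existence check,
# attempt the open, and let the failure raise.

def process(path):
    try:
        f = open(path)               # the open itself is the check
    except FileNotFoundError:
        return "missing"             # or re-raise / wrap as appropriate
    with f:
        return "processed"

print(process("/no/such/file"))  # -> missing
```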