Panic/recover in Go vs. try/catch in other languages

I've just read this post about Panic/Recover in Go and I'm not clear on how this differs from try/catch in other mainstream languages.

Panic/Recover are function scoped. It's like saying that you're only allowed one try/catch block in each function and the try has to cover the whole function. This makes it really annoying to use Panic/Recover in the same way that java/python/c# etc. use exceptions. This is intentional.
This also encourages people to use Panic/Recover in the way that it was designed to be used. You're supposed to recover() from a panic() and then return an error value to the caller.
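A minimal sketch of that idiom (the function and names are invented for illustration): the deferred closure recovers from the panic and turns it into an ordinary error return, so callers only ever see the usual (result, error) contract.

package main

import "fmt"

// parseConfig keeps panics as an internal detail and reports failures
// to its caller as an ordinary error value.
func parseConfig(raw string) (cfg string, err error) {
    defer func() {
        if r := recover(); r != nil {
            err = fmt.Errorf("parseConfig: %v", r)
        }
    }()
    if raw == "" {
        panic("empty input") // never escapes this function
    }
    return "parsed:" + raw, nil
}

func main() {
    if _, err := parseConfig(""); err != nil {
        fmt.Println(err) // prints: parseConfig: empty input
    }
}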

I keep looking at this question trying to think of the best way to answer it. It's easiest to just point to the idiomatic uses of panic/recover, as opposed to try/catch and exceptions in other languages, or to the concepts behind those idioms (which can basically be summed up as "exceptions should only occur in truly exceptional circumstances").
But as to what the actual difference is between them? I'll try to summarize as best I can.
One of the main differences compared to try/catch blocks is the way control flows. In a typical try/catch scenario, the code after the catch block will run unless it propagates the error. This is not so with panic/recover. A panic aborts the current function and begins to unwind the stack, running deferred functions (the only place recover does anything) as it encounters them.
Really I'd take that even further: panic/recover is almost nothing like try/catch in the sense that try and catch are (or at least act like) control structures, and panic/recover are not.
This really stems out of the fact that recover is built around the defer mechanism, which (as far as I can tell) is a fairly unique concept in Go.
There are certainly more differences, which I'll add if I can articulate my thoughts a bit better.
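A tiny illustration of the control flow described above (function names invented): the panic aborts each function in turn, and every deferred call runs as its frame is unwound, even though nothing recovers here and the program still crashes at the end.

package main

import "fmt"

func inner() {
    defer fmt.Println("inner: deferred cleanup runs first")
    panic("boom") // aborts inner, then outer, then main, running defers on the way up
}

func outer() {
    defer fmt.Println("outer: deferred cleanup runs second")
    inner()
    fmt.Println("never reached") // skipped: the panic bypasses the rest of outer
}

func main() {
    defer fmt.Println("main: deferred cleanup runs last, then the runtime prints the panic")
    outer()
}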

I think we all agree that panic is throw, recover is catch, and defer is finally.
The big difference seems that recover goes inside defer. Going back to traditional terms, it lets you decide exactly at which point of your finally you want to bother catching anything, or not at all.
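Putting that in traditional terms, a sketch (the sentinel value is made up for illustration): the deferred function plays the role of the finally block, and inside it you decide whether to "catch" by inspecting the panic value, re-panicking when it is something you don't understand.

package main

import "fmt"

// errTooBig is an illustrative sentinel; only panics carrying it are "caught".
var errTooBig = fmt.Errorf("value too big")

func compute(n int) (result int, err error) {
    defer func() {
        fmt.Println("cleanup always runs, like a finally block")
        switch r := recover(); r {
        case nil:
            // no panic: nothing to catch
        case errTooBig:
            err = errTooBig // the one panic we choose to "catch"
        default:
            panic(r) // anything else keeps unwinding, as if uncaught
        }
    }()
    if n > 100 {
        panic(errTooBig)
    }
    return n * 2, nil
}

func main() {
    fmt.Println(compute(3))   // 6 <nil>
    fmt.Println(compute(200)) // 0 value too big
}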

defer is a mechanism not only for handling errors but also for doing comfortable and controlled cleanup. panic works like raise() in other languages, and with the help of recover() you get the chance to catch the panic while it travels up the call stack. In that way it is almost like try/catch, but while the latter works on blocks, panic/recover works at the function level.
Rob Pike on the reason for this design: "We don't want to encourage the conflation of errors and exceptions that occur in languages such as Java." Instead of having a large number of different exceptions with an even larger number of uses, one should do everything to avoid runtime errors, return proper error values once an error has been determined, and use panic/recover only if there is no other way.
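A small sketch of the cleanup side of defer (the file name is invented): the Close call is written right next to the Open it pairs with, and it runs on every exit path, whether the function returns normally, returns an error, or panics.

package main

import (
    "fmt"
    "os"
)

// readChunk opens a file and returns how many bytes the first read delivered.
func readChunk(path string) (int, error) {
    f, err := os.Open(path)
    if err != nil {
        return 0, err // ordinary error value, no panic involved
    }
    defer f.Close() // cleanup runs on every exit path, including a panic

    buf := make([]byte, 4096)
    n, err := f.Read(buf)
    if err != nil {
        return 0, err
    }
    return n, nil
}

func main() {
    n, err := readChunk("example.txt") // hypothetical input file
    fmt.Println(n, err)
}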

I think panic is the same as throw and recover is the same as catch. The difference is defer. At first I thought defer was the same as finally, but later I found defer to be more flexible: a defer can be placed anywhere in your function, it captures the values of its arguments at that moment, and it can even change the function's named return values. The panic can then happen anywhere after the defer. But because there is no try block, we can't handle the "exception" until the whole function has returned. I don't think this is a disadvantage; maybe Go wants your method to do only one thing, and any exception means that thing can't go on.
And since the panic must come after the defer, you are forced to set up the handling of its "exception" before the code that can raise it.
This is just my own understanding.
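Those two properties of defer (capturing argument values, and changing named return values) can be shown in a few lines; this is a sketch, not code from the answer above.

package main

import "fmt"

func demo() (result int) {
    x := 1
    // The argument x is evaluated *now*, so this later prints the old value 1.
    defer fmt.Println("deferred call saw x =", x)

    // A deferred closure runs after the return value is set and may change it.
    defer func() { result *= 10 }()

    x = 2
    result = 7
    return // final result is 70, not 7
}

func main() {
    fmt.Println("demo returned", demo())
}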

"go" is unlike other languages a concurrent language, meaning that parts of the program runs in parallel. This means that if the designers want to have a similar mechanism as catch/throw , the mechanism has to meticulously redefined in this context. That explains the differences between the two mechanisms.

Related

What can't be done with an if-else clause that can be done with exception handling?

In other words, where do we actually need to use exception handling, and where won't a mere if-else serve the purpose?
Is exception handling just a glorified way of showing errors?
In the earliest C++ implementations, exceptions were compiled into equivalent if/else constructs, more or less. And they were slow as molasses, to the point where you can still find programming guides that recommend against exceptions on the grounds that "they are slow".
So it is not a question of "can" or "cannot". It is a question of readability and performance.
Littering every line of your code with error checks makes it harder for a human to follow the non-exceptional (i.e. common) case. And as every good programmer knows, the primary audience for your code is human readers.
As for performance, checking return codes with if/else costs a little on every call, whereas a modern ("zero-cost") exception implementation incurs essentially no overhead until an exception is actually thrown. If your exceptions truly represent "exceptional" cases, this can be a significant performance difference.
Some languages (for example C++) don't allow return values in certain cases. For example, in C++ constructors don't have return values (even void), so the only way to signal an error in a constructor is to throw an exception.
Look, there isn't anything you can do with exceptions you can't do with hand-coded 8086 assembler. (Turing complete!) Except that hand-coded assembler isn't a very good tool for a 100K lines-of-code project unless you are the best coder in the Milky Way, if then. Experience shows a number of situations where the idiom of exception handling yields the most robust code written by ordinary humans. Sharptooth's example of ctors is good. So are errors that need to propagate up the call stack. Edit: And how many people actually check that malloc didn't fail every single time? Better to throw an OutOfMemoryException.
Exceptions are just what they are named for: Exceptional events or situations which break the current flow.
You should use exceptions to indicate that something nasty and very exceptional has happened. Let's say you have a class which reads your configuration file. Exceptional situations could be:
The file cannot be found.
The file exists, but cannot be read.
The file was deleted in the middle of a read operation.
You could handle all of these with if-else blocks but it is much simpler to do with exceptions.
Of course, you can write code that does not use exceptions. However, if you do, you would have to make sure that every error any function could issue is handled properly.
For me, the biggest advantage of having exceptions is that I can write straight code that simply assumes that all functions succeeds, knowing that upper layers will take care of reporting errors.
Concretely, take the following fictitious function to work with text in an editor:
do-stuff:
    backward-line 10
    x = point
    search-for "FOO"
    return buffer-substring x point
Both "backward-line" and "search-for" could fail. If I would have to handle errors myself, I would have to check them. In addition, I would have to invent a side-channel to report to my caller that an error occurred. As I said, it can be done, but it would be a lot messier.
Consider a following real-life example. You have a complex recursive function and need to get out of all the calls currently in progress on some condition.
void complexFunction()
{
    // do stuff, then
    if( timeToBailOut() ) {
        // what? how would you get out of all instances at once?
    }
}
You would need to write those if statements very carefully, perhaps introduce a special return value and check for it at every level; that would complicate your code greatly. You can easily achieve this behavior with exceptions.
if( timeToBailOut() ) {
    throw BailOutException();
}
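For comparison, here is how that bail-out pattern tends to look in Go, with panic unwinding every recursive frame at once and a deferred recover at the public entry point turning it into an error (the names here are illustrative):

package main

import "fmt"

// bailOut is an illustrative sentinel type, used only inside this package.
type bailOut struct{ reason string }

func search(depth int) {
    if depth > 3 { // stand-in for timeToBailOut()
        panic(bailOut{"hit depth limit"}) // unwinds every recursive call at once
    }
    search(depth + 1)
}

// Search is the public entry point; it converts the internal panic into an error.
func Search() (err error) {
    defer func() {
        switch r := recover().(type) {
        case nil:
            // normal return, nothing to do
        case bailOut:
            err = fmt.Errorf("search aborted: %s", r.reason)
        default:
            panic(r) // not our sentinel: keep unwinding
        }
    }()
    search(0)
    return nil
}

func main() {
    fmt.Println(Search()) // search aborted: hit depth limit
}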
There is nothing that can be done with exception handling that can't be done without it, however, some situations would be a pain without exception handling.
Example: a() calls b() and b() calls c(). An error might occur in c() that has to be handled in a(). Without exception handling you would have to deal with it by using sentinel return values in b() and c(), and extra checking code in b(). You can imagine how much more code you would need to write if the error has to be passed through more levels from what caused it to what needs to handle it.
This is a short version of my answer to a different question that was asked here earlier and migrated.
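The same a()/b()/c() shape in Go makes that pain concrete: the error is an ordinary return value that b() has to notice and forward explicitly (function names kept from the example above, bodies invented), with %w wrapping standing in for the "side channel".

package main

import (
    "errors"
    "fmt"
)

var errLowLevel = errors.New("c: underlying operation failed") // illustrative failure

func c() error { return errLowLevel }

// b adds no handling of its own; it just checks, wraps, and forwards the error.
func b() error {
    if err := c(); err != nil {
        return fmt.Errorf("b: %w", err)
    }
    return nil
}

// a is the level that actually knows what to do about the failure.
func a() {
    if err := b(); err != nil {
        fmt.Println("a handles it:", err)
        fmt.Println("caused by the low-level error?", errors.Is(err, errLowLevel))
    }
}

func main() { a() }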

Why should I not wrap every block in "try"-"catch"?

I have always been of the belief that if a method can throw an exception then it is reckless not to protect this call with a meaningful try block.
I just posted 'You should ALWAYS wrap calls that can throw in try, catch blocks.' to this question and was told that it was 'remarkably bad advice' - I'd like to understand why.
A method should only catch an exception when it can handle it in some sensible way.
Otherwise, pass it on up, in the hope that a method higher up the call stack can make sense of it.
As others have noted, it is good practice to have an unhandled exception handler (with logging) at the highest level of the call stack to ensure that any fatal errors are logged.
As Mitch and others stated, you shouldn't catch an exception that you do not plan on handling in some way. You should consider how the application is going to systematically handle exceptions when you are designing it. This usually leads to having layers of error handling based on the abstractions - for example, you handle all SQL-related errors in your data access code so that the part of the application that is interacting with domain objects is not exposed to the fact that there is a DB under the hood somewhere.
There are a few related code smells that you definitely want to avoid in addition to the "catch everything everywhere" smell.
"catch, log, rethrow": if you want scoped based logging, then write a class that emits a log statement in its destructor when the stack is unrolling due to an exception (ala std::uncaught_exception()). All that you need to do is declare a logging instance in the scope that you are interested in and, voila, you have logging and no unnecessary try/catch logic.
"catch, throw translated": this usually points to an abstraction problem. Unless you are implementing a federated solution where you are translating several specific exceptions into one more generic one, you probably have an unnecessary layer of abstraction... and don't say that "I might need it tomorrow".
"catch, cleanup, rethrow": this is one of my pet-peeves. If you see a lot of this, then you should apply Resource Acquisition is Initialization techniques and place the cleanup portion in the destructor of a janitor object instance.
I consider code that is littered with try/catch blocks to be a good target for code review and refactoring. It indicates that either exception handling is not well understood or the code has become an amœba and is in serious need of refactoring.
Because the next question is "I've caught an exception, what do I do next?" What will you do? If you do nothing - that's error hiding and the program could "just not work" without any chance to find what happened. You need to understand what exactly you will do once you've caught the exception and only catch if you know.
You don't need to cover every block with try-catches because a try-catch can still catch unhandled exceptions thrown in functions further down the call stack. So rather than have every function have a try-catch, you can have one at the top level logic of your application. For example, there might be a SaveDocument() top-level routine, which calls many methods which call other methods etc. These sub-methods don't need their own try-catches, because if they throw, it's still caught by SaveDocument()'s catch.
This is nice for three reasons. One, it's handy: you have one single place to report an error, the SaveDocument() catch block(s). There's no need to repeat this throughout all the sub-methods, and it's what you want anyway: one single place to give the user a useful diagnostic about something that went wrong.
Two, the save is cancelled whenever an exception is thrown. With every sub-method try-catching, if an exception is thrown, you get into that method's catch block, execution leaves the function, and it carries on through SaveDocument(). If something's already gone wrong, you likely want to stop right there.
Three, all your sub-methods can assume every call succeeds. If a call fails, execution jumps to the catch block and the subsequent code is never executed. This can make your code much cleaner. For example, here's the version with error codes:
int ret = SaveFirstSection();
if (ret == FAILED)
{
    /* some diagnostic */
    return;
}
ret = SaveSecondSection();
if (ret == FAILED)
{
    /* some diagnostic */
    return;
}
ret = SaveThirdSection();
if (ret == FAILED)
{
    /* some diagnostic */
    return;
}
Here's how that might be written with exceptions:
// these throw if failed, caught in SaveDocument's catch
SaveFirstSection();
SaveSecondSection();
SaveThirdSection();
Now it's much clearer what is happening.
Note that exception-safe code can be trickier to write in other ways: you don't want to leak any memory if an exception is thrown. Make sure you know about RAII, STL containers, smart pointers, and other objects which free their resources in destructors, since the destructors of local objects run as the stack unwinds before the exception is handled.
Herb Sutter wrote about this problem here. For sure worth reading.
A teaser:
"Writing exception-safe code is fundamentally about writing 'try' and 'catch' in the correct places." Discuss.
Put bluntly, that statement reflects a fundamental misunderstanding of exception safety. Exceptions are just another form of error reporting, and we certainly know that writing error-safe code is not just about where to check return codes and handle error conditions.
Actually, it turns out that exception safety is rarely about writing 'try' and 'catch' -- and the more rarely the better. Also, never forget that exception safety affects a piece of code's design; it is never just an afterthought that can be retrofitted with a few extra catch statements as if for seasoning.
As stated in other answers, you should only catch an exception if you can do some sort of sensible error handling for it.
For example, in the question that spawned your question, the questioner asks whether it is safe to ignore exceptions for a lexical_cast from an integer to a string. Such a cast should never fail. If it did fail, something has gone terribly wrong in the program. What could you possibly do to recover in that situation? It's probably best to just let the program die, as it is in a state that can't be trusted. So not handling the exception may be the safest thing to do.
If you always handle exceptions immediately in the caller of a method that can throw an exception, then exceptions become useless, and you'd better use error codes.
The whole point of exceptions is that they need not be handled in every method in the call chain.
The best advice I've heard is that you should only ever catch exceptions at points where you can sensibly do something about the exceptional condition, and that "catch, log and release" is not a good strategy (if occasionally unavoidable in libraries).
I was given the "opportunity" to salvage several projects and executives replaced the entire dev team because the app had too many errors and the users were tired of the problems and run-around. These code bases all had centralized error handling at the app level like the top voted answer describes. If that answer is the best practice why didn't it work and allow the previous dev team to resolve issues? Perhaps sometimes it doesn't work? The answers above don't mention how long devs spend fixing single issues. If time to resolve issues is the key metric, instrumenting code with try..catch blocks is a better practice.
How did my team fix the problems without significantly changing the UI? Simple: every method was instrumented with a try..catch block, and everything was logged at the point of failure with the method name, the method parameter values concatenated into a string, the error message, the app name, the date, and the version. With this information developers can run analytics on the errors to identify the exception that occurs the most, or the namespace with the highest number of errors. It also lets them verify that an error occurring in a module is properly handled and not caused by multiple different reasons.
Another pro benefit of this is developers can set one break-point in the error logging method and with one break-point and a single click of the "step out" debug button, they are in the method that failed with full access to the actual objects at the point of failure, conveniently available in the immediate window. It makes it very easy to debug and allows dragging execution back to the start of the method to duplicate the problem to find the exact line. Does centralized exception handling allow a developer to replicate an exception in 30 seconds? No.
The statement "A method should only catch an exception when it can handle it in some sensible way." This implies that developers can predict or will encounter every error that can happen prior to release. If this were true a top level, app exception handler wouldn't be needed and there would be no market for Elastic Search and logstash.
This approach also lets devs find and fix intermittent issues in production! Would you like to debug without a debugger in production? Or would you rather take calls and get emails from upset users? This allows you to fix issues before anyone else knows and without having to email, IM, or Slack with support as everything needed to fix the issue is right there. 95% of issues never need to be reproduced.
To work properly it needs to be combined with centralized logging that can capture the namespace/module, class name, method, inputs, and error message and store in a database so it can be aggregated to highlight which method fails the most so it can be fixed first.
Sometimes developers choose to throw exceptions up the stack from a catch block but this approach is 100 times slower than normal code that doesn't throw. Catch and release with logging is preferred.
This technique was used to quickly stabilize an app, developed by 12 devs over 2 years at a Fortune 500 company, that failed every hour for most users. Using it, 3,000 different exceptions were identified, fixed, tested, and deployed in 4 months; that averages out to roughly one fix every 15 minutes of working time over those 4 months.
I agree that it is not fun to type in everything needed to instrument the code and I prefer to not look at the repetitive code, but adding 4 lines of code to each method is worth it in the long run.
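In Go terms (where a deferred recover stands in for the per-method try/catch), that kind of instrumentation might look like the sketch below; the logFailure helper and its fields are invented, not taken from the answer above.

package main

import (
    "fmt"
    "log"
)

// logFailure is a hypothetical helper: it records the method name, its inputs,
// and the recovered problem at the point of failure.
func logFailure(method, inputs string, problem interface{}) {
    log.Printf("app=demo method=%s inputs=%q error=%v", method, inputs, problem)
}

func saveCustomer(name string, age int) (err error) {
    inputs := fmt.Sprintf("name=%s age=%d", name, age)
    defer func() {
        if r := recover(); r != nil {
            logFailure("saveCustomer", inputs, r)
            err = fmt.Errorf("saveCustomer failed: %v", r)
        }
    }()
    if age < 0 {
        panic("age must not be negative") // stand-in for an unexpected failure
    }
    return nil
}

func main() {
    fmt.Println(saveCustomer("Ada", -1))
}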
I agree with the basic direction of your question to handle as many exceptions as possible at the lowest level.
Some of the existing answers go like "You don't need to handle the exception. Someone else will do it up the stack." In my experience that is a bad excuse not to think about exception handling in the piece of code currently being developed, making exception handling someone else's problem, or a problem for later.
That problem grows dramatically in distributed development, where you may need to call a method implemented by a co-worker. And then you have to inspect a nested chain of method calls to find out why he/she is throwing some exception at you, which could have been handled much easier at the deepest nested method.
The advice my computer science professor gave me once was: "Use Try and Catch blocks only when it's not possible to handle the error using standard means."
As an example, he told us that if a program ran into some serious issue in a place where it's not possible to do something like:
int f()
{
    // Do stuff
    if (condition == false)
        return -1;
    return 0;
}

int result = f();
if (result != 0)
{
    // handle error
}
Then you should be using try/catch blocks. While you can use exceptions to handle this, it's generally not recommended because exceptions are expensive performance-wise.
If you want to test the outcome of every function, use return codes.
The purpose of Exceptions is so that you can test outcomes LESS often. The idea is to separate exceptional (unusual, rarer) conditions out of your more ordinary code. This keeps the ordinary code cleaner and simpler - but still able to handle those exceptional conditions.
In well-designed code deeper functions might throw and higher functions might catch. But the key is that many functions "in between" will be free from the burden of handling exceptional conditions at all. They only have to be "exception safe", which does not mean they must catch.
I would like to add to this discussion that, since C++11, it does make a lot of sense, as long as every catch block rethrows the exception up until the point it can/should be handled. This way a backtrace can be generated. I therefore believe the previous opinions are in part outdated.
Use std::nested_exception and std::throw_with_nested
How to achieve this is described on StackOverflow here and here.
Since you can do this with any derived exception class, you can add a lot of information to such a backtrace!
You may also take a look at my MWE on GitHub, where a backtrace would look something like this:
Library API: Exception caught in function 'api_function'
Backtrace:
~/Git/mwe-cpp-exception/src/detail/Library.cpp:17 : library_function failed
~/Git/mwe-cpp-exception/src/detail/Library.cpp:13 : could not open file "nonexistent.txt"
I feel compelled to add another answer although Mike Wheat's answer sums up the main points pretty well. I think of it like this. When you have methods that do multiple things you are multiplying the complexity, not adding it.
In other words, a method that is wrapped in a try catch has two possible outcomes. You have the non-exception outcome and the exception outcome. When you're dealing with a lot of methods this exponentially blows up beyond comprehension.
Exponentially, because if each method can finish in two different ways, then every time you call another method you double the number of potential outcomes. By the time you've called five methods you are up to 32 possible paths at a minimum. Compare this to not doing a try/catch in every single method, and you only have one path to follow.
That's basically how I look at it. You might be tempted to argue that any type of branching does the same thing but try/catches are a special case because the state of the application basically becomes undefined.
So in short, try/catches make the code a lot harder to comprehend.
Besides the advice above, I personally use some try + catch + rethrow, for the following reasons:
At the boundary between different coders: I use try + catch + rethrow in my own code before the exception propagates to a caller written by someone else. This gives me a chance to know that some error condition occurred in my code, at a place much closer to the code that originally threw the exception; the closer it is, the easier it is to find the reason.
At the boundary between modules, even when the different modules are written by the same person.
For learning and debugging purposes, I use catch(...) in C++ and catch(Exception ex) in C#. In C++ the standard library does not throw very many exceptions, so this case is rare there, but it is common in C#: C# has a huge library and a mature exception hierarchy, and the library code throws tons of exceptions. In theory I (and you) should know every exception the functions I call can throw, why they throw it, and how to handle each one gracefully (pass it on, or catch and handle it in place). Unfortunately, in reality it is very hard to know everything about the potential exceptions before writing a single line of code. So I catch everything and let my code speak loudly through logging (in the production environment) or an assert dialog (in the development environment) when any exception actually occurs. This way I add exception-handling code progressively. I know this conflicts with the good advice above, but in reality it works for me and I don't know a better way to handle this problem.
You have no need to wrap every part of your code in try-catch. The main use of a try-catch block is to handle errors and catch bugs/exceptions in your program. Some uses of try-catch:
You can use the block where you want to handle an exception, or simply where the enclosed code may throw one.
If you want to dispose of your objects immediately after their use, you can pair it with a finally block (or a using statement) so the cleanup always runs.

F# exception handling constructs

Why doesn't F# naturally support a try/with/finally block?
Doesn't it make sense to try something, deal with whatever exception it throws, at least to log the exception, and then be sure that some code executes after all that?
Sure, we can do
try
    try
        ...
    with ex -> ...
finally
    ...
But that seems too artificial, it clearly demonstrates that "F# is against try/with/finally". Why is that?
As somebody already mentioned, you would usually use try-with-finally to make sure that you properly release all resources in case of an exception. I think in most of the cases you can do this more easily using the use keyword:
open System.IO

let input =
    try
        use stream = new FileStream(@"C:\temp\test.txt", FileMode.Open)
        use rdr = new StreamReader(stream)
        Some(rdr.ReadToEnd())
    with :? IOException as e ->
        logError(e)
        None
I think this is mostly the reason why you don't need try-with-finally as often as you would in other languages. Of course, there are still situations where you may need it, but you can often avoid them by creating an IDisposable instance using object expressions (which is syntactically very easy). I think this is rare enough that the F# team doesn't really need to worry about it.
Orthogonality? You can simply nest a try-with inside a try-finally, as you show. (This is what happens at the IL level anyway, I think.)
That said, try-with-finally is something that we may consider in a future version of the language.
Personally I have only run into wanting it a couple times, but when you do need it, it is a little bothersome to have to do the extra nesting/indent. In general I find that I rarely write exception handling code, and it's usually just one or the other (e.g. a finally to restore an invariant or other transactional semantics, or a 'catch' near the top of an app to log an exception or show the user diagnostics).
But I don't think there's a lot to 'read in to' regarding the language design here.
Without going into great detail, because the great details are in Expert .NET 2.0 IL Assembler by Serge Lidin (Ch. 14, Managed Exception Handling, p. 300):
"The finally and fault handlers cannot peacefully coexist with other handlers, so if a guarded block has a finally or fault handler, it cannot have anything else. To combine a finally or fault handler with other handlers, you need to nest the guarded and handler blocks within other guarded blocks, ..., so that each finally or fault handler has its own personal guarded block."
A try/with compiles to a catch handler and a try/finally to a finally handler, so by the rule above they cannot share a single guarded block.
See: ILGenerator.BeginFaultBlock Method
If you emit a fault handler in an exception block that also contains a catch handler or a finally handler, the resulting code is unverifiable.
So all of the .NET languages could be considered to be using syntactic sugar here, and since F# is so new, they just haven't implemented it yet. No harm, no foul.
But that seems too artificial, it clearly demonstrates that "F# is against try/with/finally". Why is that?
I guess F# might be "against" exception handling in general. For the sake of .NET interoperability it has to support exceptions, but basically there is no exception handling* in functional programming.
Throwing and catching exceptions means performing "jumps to nowhere" that aren't even noticed by the type system, both of which are fundamentally against the functional philosophy.
You can use purely functional (monadic) code to wrap exceptions. All errors are handled through values, in terms of the underlying type system and free of jumps/side effects.
Instead of writing a function
let readNumber() : int = ...
that may throw arbitrary exceptions, you'll simply state
let readNumber() : int option = ...
which makes this point automatically clear by its type signature.
*This doesn't mean we don't handle exceptional situations, it's just about the kind of exception-handling in .NET/C++.
I will clarify my comment in this answer.
I maintain there is no reason to assume that you want to catch exceptions and finalize some resources at the same level. Perhaps you got used to doing it that way in a language where it was convenient to handle both at the same time, but it's a coincidence when they happen together. Finalization is useful when you do not catch all exceptions from the inner block. try...with is for catching exceptions so that the computation can continue normally. There simply is no relationship between the two (if anything, they go in opposite directions: are you catching the exception, or letting it go through?).
Why do you have to finalize anything at all? Shouldn't the GC be managing unreferenced resources for you? Ah... but the language is trying to give you access to system primitives which work with side-effects, with explicit allocations and de-allocations. You have to de-allocate what you have allocated (in all cases)... Shouldn't you be blaming the rotten interface that the system is providing instead of F#, which is only the messenger in this case?

Does wrapping everything in try/catch blocks constitute defensive programming?

I have been programming for the last 3 years. When I program, I try to handle all known exceptions and alert the user gracefully. I have seen some code recently which has almost all methods wrapped inside try/catch blocks. The author says it is part of defensive programming. I wonder, is this really defensive programming? Do you recommend putting all your code in try blocks?
My basic rule is : Unless you can fix the problem which caused the exception, do not catch it, let it bubble up to a level where it can be dealt with.
In my experience, 95% of all catch blocks either just ignore the exception (catch {}) or merely log the error and rethrow the exception. The latter might seem like the right thing to do, but in practice, when this is done at every level, you merely end up with your log cluttered with five copies of the same error message. Usually these apps have an "ignore catch" at the topmost level (since "we have try/catch at all the lower levels"), resulting in a very slow app with lots of missed exceptions, and an error log that is too long for anyone to be willing to look through it.
Extensive use of Try...Catch isn't defensive programming, it's just nailing the corpse in an upright position.
Try...Finally can be used extensively for recovery in the face of unexpected exceptions. Only if you expect an exception and know how to deal with it should you use Try...Catch instead.
Sometimes I see Try..Catch System.Exception, where the catch block just logs the exception and re-throws. There are at least 3 problems with that approach:
Rethrow assumes an unhandled exception, and therefore that the program should terminate because it's in an unknown state. But catch causes the Finally blocks below the Catch block to run. In an undefined situation, the code in these Finally blocks could make the problem worse.
The code in those Finally blocks will change program state. So any logging won't capture the actual program state when the exception was originally thrown. And investigation will be more difficult because the state has changed.
It gives you a miserable debugging experience, because the debugger stops on the rethrow, not the original throw.
No, it's not "defensive programming." Your coworker is trying to rationalize his bad habit by employing a buzzword for a good habit.
What he's doing should be called "sweeping it under the rug." It's like uniformly (void)-ing the error-status return value from method calls.
The term "defensive programming" stands for writing code in such a way that it can recover from error situations or that it avoid the error situation altogether. For example:
private String name;
public void setName(String name) {
}
How do you handle name == null? Do you throw an exception or do you accept it? If it doesn't make sense to have an object without a name, then you should throw an exception. What about name == ""?
But ... later you write an editor. While you set up the UI, you find that there are situations where a user can decide to take the name away or the name can become empty while the user edits it.
Another example:
public boolean isXXX (String s) {
}
Here, the defensive strategy is often to return false when s == null (avoid NPEs when you can).
Or:
public String getName() {
}
A defensive programmer might return "" if name == null to avoid NPEs in calling code.
If you're going to handle random exceptions, handle them in only one place - the very top of the application, for the purposes of:
presenting a friendly message to the user, and
saving the diagnostics.
For everything else, you want the most immediate, location-specific crash possible, so that you catch these things as early as possible - otherwise exception handling becomes a way of hiding sloppy design and code.
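A sketch of that single top-of-the-application handler in Go terms (the run function and messages are invented): one deferred recover in main saves the diagnostics and shows the friendly message, while everything below it is free to crash immediately and specifically.

package main

import (
    "fmt"
    "log"
    "os"
)

func run() error {
    // Application logic; any panic below here reaches the handler in main.
    panic("unexpected state") // stand-in for a bug surfacing at run time
}

func main() {
    defer func() {
        if r := recover(); r != nil {
            log.Printf("diagnostics: %v", r)            // save the diagnostics
            fmt.Println("Sorry, something went wrong.") // friendly message to the user
            os.Exit(1)
        }
    }()
    if err := run(); err != nil {
        log.Fatal(err)
    }
}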
In most cases where the exception is predictable, it's possible to test ahead of time, for the condition that the exception handler will catch.
In general, If...Else is much better than Try...Catch.
Catching random exceptions is bad. What then?
Ignore them? Excellent. Let me know how that works for them.
Log them and keep running? I think not.
Throw a different exception as part of crashing? Good luck debugging that.
Catching exceptions for which you can actually do something meaningful is good. These cases are easy to identify and maintain.
Can I just say as an aside here that every time one of my co-workers writes a method signature with "throws Exception" instead of listing the types of exceptions the method really throws, I want to go over and shoot them in the head? The problem is that after a while you've got 14 levels of calls all of which say "throws Exception", so refactoring to make them declare what they really throw is a major exercise.
There is such a thing as "too much" processing, and catching all exceptions kind of defeats the point. Specifically for C++, the catch(...) statement does catch all exceptions, but you can't process the contents of the exception because you don't know its type (and it could be anything).
You should catch the exceptions you can handle fully or partially, rethrowing the ones you only handled partially. You should not catch any exceptions that you can't handle, because that will just obfuscate errors that may (or rather, will) bite you later on.
I would recommend against this practice. Putting code into try-catch blocks when you know the types of exceptions that can be thrown is one thing. It allows you, as you stated, to gracefully recover and/or alert the user as to the error. However, putting all your code inside such blocks where you don't know what the error is that may occur is using exceptions to control program flow, which is a big no-no.
If you are writing well-structured code, you will know about every exception that can occur, and can catch those specifically. If you don't know how a particular exception can be thrown, then don't catch it, just-in-case. When it happens, you can then figure out the exception, what caused it, and then catch it.
I guess the real answer is "It depends". If try-catch blocks are catching very generic exceptions then I would say it is defensive programming in the same way that never driving out of your neighborhood is defensive driving. A try-catch (imo) should be tailored to specific exceptions.
Again, this is just my opinion, but my concept of defensive programming is that you need fewer/smaller try-catch blocks not more/larger ones. Your code should be doing everything it can to make sure an exception condition can never exist in the first place.
In C++ the one reason to write lots of try/catch blocks is to get a stack trace of where the exception was thrown. What you do is write a try/catch everywhere, and (assuming you aren't at the right spot to deal with the exception) have the catch log some trace info then re-throw the exception. In this way, if an exception bubbles all the way up and causes the program to terminate, you'll have a full stack trace of where it all started to go wrong (if you don't do this, then an unhandled C++ exception will have helpfully unwound the stack and eradicated any possibility of you figuring out where it came from).
I would imagine that in any language with better exception handling (i.e. uncaught exceptions tell you where they came from) you'd want to only catch exceptions if you could do something about them. Otherwise, you're just making your program hard to read.
I've found "try" "catch" blocks to be very useful, especially if anything realtime (such as accessing a database) is used.
Too many? Eye of the beholder.
I've found that copying a log to Word, and searching with "find" -- if the log reader does not have "find" or "search" as part of its included tools -- is a simple but excellent way to slog through verbose logs.
It certainly seems "defensive" in the ordinary sense of the word.
I've found, through experience, to follow whatever your manager, team-leader, or co-worker does. If you're just programming for yourself, use them until the code is "stable" or in debug builds, and then remove them when done.

Why is exception handling bad? [closed]

Google's Go language has no exceptions as a design choice, and Linus of Linux fame has called exceptions crap. Why?
Exceptions make it really easy to write code where an exception being thrown will break invariants and leave objects in an inconsistent state. They essentially force you to remember that most every statement you make can potentially throw, and handle that correctly. Doing so can be tricky and counter-intuitive.
Consider something like this as a simple example:
class Frobber
{
    int m_NumberOfFrobs;
    FrobManager m_FrobManager;
public:
    void Frob()
    {
        m_NumberOfFrobs++;
        m_FrobManager.HandleFrob(new FrobObject());
    }
};
Assuming the FrobManager will delete the FrobObject, this looks OK, right? Or maybe not... Imagine then if either FrobManager::HandleFrob() or operator new throws an exception. In this example, the increment of m_NumberOfFrobs does not get rolled back. Thus, anyone using this instance of Frobber is going to have a possibly corrupted object.
This example may seem stupid (ok, I had to stretch myself a bit to construct one :-)), but, the takeaway is that if a programmer isn't constantly thinking of exceptions, and making sure that every permutation of state gets rolled back whenever there are throws, you get into trouble this way.
As an example, you can think of it like you think of mutexes. Inside a critical section, you rely on several statements to make sure that data structures are not corrupted and that other threads can't see your intermediate values. If any one of those statements just randomly doesn't run, you end up in a world of pain. Now take away locks and concurrency, and think about each method like that. Think of each method as a transaction of permutations on object state, if you will. At the start of your method call, the object should be in a clean state, and at the end there should also be a clean state. In between, variable foo may be inconsistent with bar, but your code will eventually rectify that.
What exceptions mean is that any one of your statements can interrupt you at any time. The onus is on you in each individual method to get it right and roll back when that happens, or to order your operations so that throws don't affect object state. If you get it wrong (and it's easy to make this kind of mistake), then the caller ends up seeing your intermediate values.
Methods like RAII, which C++ programmers love to mention as the ultimate solution to this problem, go a long way to protect against this. But they aren't a silver bullet. It will make sure you release resources on a throw, but doesn't free you from having to think about corruption of object state and callers seeing intermediate values. So, for a lot of people, it's easier to say, by fiat of coding style, no exceptions. If you restrict the kind of code you write, it's harder to introduce these bugs. If you don't, it's fairly easy to make a mistake.
Entire books have been written about exception safe coding in C++. Lots of experts have gotten it wrong. If it's really that complex and has so many nuances, maybe that's a good sign that you need to ignore that feature. :-)
The reason for Go not having exceptions is explained in the Go language design FAQ:
Exceptions are a similar story. A number of designs for exceptions have been proposed but each adds significant complexity to the language and run-time. By their very nature, exceptions span functions and perhaps even goroutines; they have wide-ranging implications. There is also concern about the effect they would have on the libraries. They are, by definition, exceptional yet experience with other languages that support them show they have profound effect on library and interface specification. It would be nice to find a design that allows them to be truly exceptional without encouraging common errors to turn into special control flow that requires every programmer to compensate.
Like generics, exceptions remain an open issue.
In other words, they haven't yet figured out how to support exceptions in Go in a way that they think is satisfactory. They are not saying that exceptions are bad per se.
UPDATE - May 2012
The Go designers have now climbed down off the fence. Their FAQ now says this:
We believe that coupling exceptions to a control structure, as in the try-catch-finally idiom, results in convoluted code. It also tends to encourage programmers to label too many ordinary errors, such as failing to open a file, as exceptional.
Go takes a different approach. For plain error handling, Go's multi-value returns make it easy to report an error without overloading the return value. A canonical error type, coupled with Go's other features, makes error handling pleasant but quite different from that in other languages.
Go also has a couple of built-in functions to signal and recover from truly exceptional conditions. The recovery mechanism is executed only as part of a function's state being torn down after an error, which is sufficient to handle catastrophe but requires no extra control structures and, when used well, can result in clean error-handling code.
See the Defer, Panic, and Recover article for details.
So the short answer is that they can do it differently using multi-value return. (And they do have a form of exception handling anyway.)
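A short sketch of what that multi-value style looks like in practice (the file name is invented): failure comes back as a second return value, which the caller must inspect before trusting the data.

package main

import (
    "fmt"
    "os"
)

// readGreeting reports failure through its second return value rather than by
// throwing; %w wraps the underlying error so callers can still inspect it.
func readGreeting() (string, error) {
    data, err := os.ReadFile("greeting.txt") // hypothetical input file
    if err != nil {
        return "", fmt.Errorf("reading greeting: %w", err)
    }
    return string(data), nil
}

func main() {
    msg, err := readGreeting()
    if err != nil {
        fmt.Println("could not load greeting:", err)
        return
    }
    fmt.Println(msg)
}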
... and Linus of Linux fame has called exceptions crap.
If you want to know why Linus thinks exceptions are crap, the best thing is to look for his writings on the topic. The only thing I've tracked down so far is this quote that is embedded in a couple of emails on C++:
"The whole C++ exception handling thing is fundamentally broken. It's especially broken for kernels."
You'll note that he's talking about C++ exceptions in particular, and not exceptions in general. (And C++ exceptions do apparently have some issues that make them tricky to use correctly.)
My conclusion is that Linus hasn't called exceptions (in general) "crap" at all!
Exceptions are not bad per se, but if you know they are going to happen a lot, they can be expensive in terms of performance.
The rule of thumb is that exceptions should flag exceptional conditions, and that you should not use them for control of program flow.
I disagree with "only throw exceptions in an exceptional situation." While generally true, it's misleading. Exceptions are for error conditions (execution failures).
Regardless of the language you use, pick up a copy of Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries (2nd Edition). The chapter on exception throwing is without peer. Some quotes from the first edition (the 2nd's at my work):
DO NOT return error codes.
Error codes can be easily ignored, and often are.
Exceptions are the primary means of reporting errors in frameworks.
A good rule of thumb is that if a method does not do what its name suggests, it should be considered a method-level failure, resulting in an exception.
DO NOT use exceptions for the normal flow of control, if possible.
There are pages of notes on the benefits of exceptions (API consistency, choice of location of error handling code, improved robustness, etc.) There's a section on performance that includes several patterns (Tester-Doer, Try-Parse).
Exceptions and exception handling are not bad. Like any other feature, they can be misused.
From the perspective of golang, I guess not having exception handling keeps the compiling process simple and safe.
From the perspective of Linus, I understand that kernel code is ALL about corner cases. So it makes sense to refuse exceptions.
Exceptions make sense in code where it's okay to drop the current task on the floor, and where common-case code has more importance than error handling. But they require code generation from the compiler.
For example, they are fine in most high-level, user-facing code, such as web and desktop application code.
Exceptions in and of themselves are not "bad", it's the way that exceptions are sometimes handled that tends to be bad. There are several guidelines that can be applied when handling exceptions to help alleviate some of these issues. Some of these include (but are surely not limited to):
Do not use exceptions to control program flow - i.e. do not rely on "catch" statements to change the flow of logic. Not only does this tend to hide various details around the logic, it can lead to poor performance.
Do not throw exceptions from within a function when a returned "status" would make more sense - only throw exceptions in an exceptional situation. Creating exceptions is an expensive, performance-intensive operation. For example, if you call a method to open a file and that file does not exist, throw a "FileNotFound" exception. If you call a method that determines whether a customer account exists, return a boolean value; do not throw a "CustomerNotFound" exception (see the sketch after this list).
When determining whether or not to handle an exception, do not use a "try...catch" clause unless you can do something useful with the exception. If you are not able to handle the exception, you should just let it bubble up the call stack. Otherwise, exceptions may get "swallowed" by the handler and the details will get lost (unless you rethrow the exception).
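The customer-lookup contrast from the list above, rendered as a Go sketch (the in-memory store and names are invented): an absent customer is an ordinary result of the existence check, while loadCustomer, which expects the customer to be there, reports absence as an error.

package main

import "fmt"

// store is an invented in-memory stand-in for a real data source.
var store = map[int]string{1: "Ada", 2: "Grace"}

// customerExists treats "not found" as a normal outcome, not a failure.
func customerExists(id int) bool {
    _, ok := store[id]
    return ok
}

// loadCustomer assumes the customer should exist, so a missing one is an error.
func loadCustomer(id int) (string, error) {
    name, ok := store[id]
    if !ok {
        return "", fmt.Errorf("customer %d not found", id)
    }
    return name, nil
}

func main() {
    fmt.Println(customerExists(7)) // false, with no error machinery involved
    fmt.Println(loadCustomer(1))   // Ada <nil>
}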
Typical arguments are that there's no way to tell what exceptions will come out of a particular piece of code (depending on language) and that they are too much like gotos, making it difficult to mentally trace execution.
http://www.joelonsoftware.com/items/2003/10/13.html
There is definitely no consensus on this issue. I would say that from the point of view of a hard-core C programmer like Linus, exceptions are definitely a bad idea. A typical Java programmer is in a vastly different situation, though.
Exceptions aren't bad. They fit in well with C++'s RAII model, which is the most elegant thing about C++. If you already have a bunch of code that's not exception-safe, then they're bad in that context. If you're writing really low-level software, like the Linux kernel, then they're bad. If you prefer littering your code with a bunch of error-return checks, then they won't help. If you don't have a plan for resource control when an exception is thrown (which C++ destructors provide), then they're bad.
A great use-case for exceptions is thus....
Say you are on a project where every controller (around 20 different major ones) extends a single superclass controller with an action method. Every controller then does a bunch of different things, calling objects B, C, D in one case and F, G, D in another. Exceptions came to the rescue here: there used to be tons of return-code handling, and EVERY controller handled it differently. I whacked all that code, threw the proper exception from "D", caught it in the superclass controller's action method, and now all our controllers are consistent. Previously, D was returning null for MULTIPLE different error cases that we wanted to tell the end user about but couldn't, and I didn't want to turn the StreamResponse into a nasty ErrorOrStreamResponse object (mixing a data structure with errors is, in my opinion, a bad smell, and I see lots of code return a "Stream" or other entity with error info embedded in it; it should really be the function returning either the success structure OR the error structure, which I can do with exceptions but not with return codes). The C# style of multiple return values is something I might consider sometimes, but in many cases the exception can skip a whole lot of layers (layers that I don't need to clean up resources in either).
Yes, we have to worry about resource cleanup/leaks at each level, but in general none of our controllers had any resources to clean up.
Thank god we had exceptions, or I would have been in for a huge refactor and wasted too much time on something that should be a simple programming problem.
Theoretically, exceptions are really bad. In a perfect mathematical world you could not get exceptional situations. Look at functional languages: they have no side effects, so they virtually have no source of exceptional situations.
But reality is another story. We always have situations that are "unexpected". This is why we need exceptions.
I think we can think of exceptions as syntactic sugar for an ExceptionSituationObserver. You just get notifications of exceptions. Nothing more.
With Go, I think they will introduce something that deals with "unexpected" situations. I can guess that they will try to make it sound less destructive than exceptions and more like application logic. But this is just my guess.
The exception-handling paradigm of C++, which forms a partial basis for that of Java and in turn .NET, introduces some good concepts, but also has some severe limitations. One of the key design intentions of exception handling is to allow methods to ensure that they will either satisfy their post-conditions or throw an exception, and also to ensure that any cleanup which needs to happen before a method can exit will happen. Unfortunately, the exception-handling paradigms of C++, Java, and .NET all fail to provide any good means of handling the situation where unexpected factors prevent the expected cleanup from being performed. That in turn means one must accept one of three risks: having everything come to a screeching halt if something unexpected happens (the C++ approach when an exception occurs during stack unwinding); having a condition that cannot be resolved, because of a problem that occurred during stack-unwinding cleanup, be mistaken for one that can be resolved (and could have been, had the cleanup succeeded); or having an unresolvable problem, whose stack-unwinding cleanup triggers an exception that would typically be resolvable, go unnoticed because the code that handles the latter problem declares it "resolved".
Even if exception handling were generally good, it is not unreasonable to regard as unacceptable an exception-handling paradigm that fails to provide a good means of handling problems that occur while cleaning up after other problems. That is not to say that a framework could not be designed with an exception-handling paradigm that ensures sensible behavior even in multiple-failure scenarios, but none of the top languages or frameworks can as yet do so.
I haven't read all of the other answers, so this may have already been mentioned, but one criticism is that exceptions cause programs to break in long chains, making it difficult to track down errors when debugging the code. For example, if Foo() calls Bar(), which calls Wah(), which calls ToString(), then accidentally pushing the wrong data into ToString() ends up looking like an error in Foo(), an almost completely unrelated function.
For me the issue is very simple: many programmers use exception handlers inappropriately. Having more language facilities is better, and being able to handle exceptions is good. An example of bad use is a value that must be an integer not being verified, or an input used as a divisor not being checked for division by zero. Exception handling can look like an easy way to avoid extra work and hard thinking; the programmer may want a dirty shortcut and slap an exception handler on it. The statement "professional code NEVER fails" can be illusory when some of the issues processed by the algorithm are uncertain by their very nature; in those genuinely unknown situations it is good for the exception handler to come into play. Good programming practices are a matter of debate.
An exception that is not handled is generally bad.
An exception handled badly is bad (of course).
The "goodness/badness" of exception handling depends on the context/scope and on its appropriateness, not on doing it for its own sake.
Okay, boring answer here. I guess it depends on the language, really. Where an exception can leave allocated resources behind, it should be avoided. In scripting languages, exceptions simply abandon or jump over parts of the application flow. That is unpleasant in itself, yet escaping near-fatal errors via exceptions is an acceptable idea.
For error signaling I generally prefer error signals. It all depends on the API, the use case and the severity, or on whether logging suffices. Also, I'm trying to redefine the behaviour and throw Phonebooks() instead. The idea being that "exceptions" are often dead ends, but a "Phonebook" contains helpful information on error recovery or alternative execution routes. (Haven't found a good use case yet, but I keep trying.)