Judging whether an exception is exceptional

It's a popular and well-known maxim that you should "only catch/throw exceptions which are exceptional". But how do you determine whether an exception is "exceptional"?
For example, a bad password is very routine when logging into a service, so it is not exceptional. Statistics for a web app would probably show something like one bad login for every five attempts (across all users). Likewise, attempting to go to checkout with a basket in an online store could be very common (especially for new users). A file not found, however, could go either way. I usually work along the lines that if a method is missing something it needs to do its work, it should throw an exception, but it gets a little confusing here. In some cases a missing file could be common (e.g. a file share used by many users with no tight controls), whereas a very locked-down production environment missing a file would be exceptional.
Is this the right way to decide whether an exception is exceptional or not? I can easily classify cases like a lost network connection as exceptional, but some cases are hard to judge. Is it subjective?
Thanks

I think it's pretty subjective, honestly, so I prefer to avoid that method of figuring out when I should use exceptions.
Instead, I prefer to consider three things:
Is it likely that I might want to let the call stack unwind more than one level?
Is there another way? (Return null or an error code, etc.) If so, do I have even the slightest performance concern?
If neither of those lead to a clear decision, which is easier to read by someone who has to maintain the code?
If #1 is true, and I don't have a MAJOR performance concern, I will probably opt to use exceptions because it will speed up my development time not to have to code return codes (and manually code the logic to have them propagate up the call stack if needed). When you use exceptions, call stack unwinding is free of charge for development time.
If #2 is true, and either I'm not going more than one frame (maybe two?) up the call stack or I have a serious performance concern (in a tight loop, for example), then I'll try really hard to find another way that doesn't involve exceptions.
Exceptions are only a tool for programmers in a language which supports them. I don't believe they have to have any intrinsic value as to what is "exceptional" or not. Instead, I say use them when they are the best tool for the job.
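The unwinding argument in #1 can be sketched in C++ (all names here are hypothetical, chosen only for illustration): with error codes, every intermediate frame must check and forward the failure by hand; with exceptions, the intermediate frame carries no error plumbing at all.

```cpp
#include <stdexcept>
#include <string>

// Error-code style: every intermediate layer must check and forward failures.
int load_config_ec(const std::string& path, std::string& out) {
    if (path.empty()) return -1;           // failure encoded as a return code
    out = "config-data";
    return 0;
}

int start_service_ec(const std::string& path) {
    std::string cfg;
    int rc = load_config_ec(path, cfg);
    if (rc != 0) return rc;                // manual propagation up the stack
    return 0;
}

// Exception style: the intermediate frame contains no error handling at all.
std::string load_config(const std::string& path) {
    if (path.empty()) throw std::invalid_argument("empty config path");
    return "config-data";
}

void start_service(const std::string& path) {
    std::string cfg = load_config(path);   // an error unwinds past this frame for free
    (void)cfg;
}
```

Only the outermost caller that can actually do something about the failure needs a try/catch; everything in between stays clean.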

Related

Polymer1.0: Single/multiple IronAjax?

I am wondering if it is wise to use a single IronAjax for the whole app. Otherwise I run into a situation where almost every view requires its own IronAjax. It doesn't hurt to create multiple iron-ajax elements, but their names, and the names of their success and error handlers, need to be unique; otherwise they can conflict with any other iron-ajax handler defined within the tree. I have run into this situation before.
So I am not looking for "what is possible", I am looking for best practices.
A single IronAjax would indeed be more efficient, as long as the code in the error and response handlers does not need to differ much based on the URL retrieved. This is mainly a philosophical question and does not have a definitive answer, since both ways are possible and the choice depends on your comfort level in supporting the application.

Do preconditions ALWAYS have to be checked? [closed]

These days I'm used to checking every single precondition for every function, a habit I picked up from an OS programming course back at uni.
On the other hand, in the software engineering course we were taught that a common precondition should only be checked once: if a function delegates to another function, the first should check it, and checking it again in the second is redundant.
I do see the redundancy point, but I certainly feel it's safer to always check them, plus you don't have to keep track of where they were checked previously.
What's the best practice here?
I have seen no hard-and-fast rule on how to check preconditions, but I generally treat it like method documentation: if a method is publicly scoped, I assert that the preconditions are met. The logic is that public scope means you expect consumption on a broader scale, with less influence over the callers.
Personally, the effort to put assertions around private methods is something I reserve for "mission critical" methods, which would basically be ones that either perform a critical task, are subject to external compliance requirements, or are non-recoverable in the event of an exception. These are largely "judgment calls".
The time saved can be reinvested in thorough unit and integration tests to flush out these issues, and it puts the tooling in place to help enforce the quality of the input assertions as they would be consumed by a client, whether it is a class in your control or not.
I think it depends on how the team is organized: check inputs which come from outside your team.
Check inputs from end-users
Check inputs from software components written by other teams
Trust inputs received from within your own component / within your own team.
The reason for this is for support and maintenance (i.e. bug-fixing): when there's a bug report, you want to be able as quickly as possible to know which component is at fault, i.e. which team to assign the bug to.
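A minimal sketch of that boundary rule (the names and the port-parsing task are hypothetical): the public entry point validates input arriving from outside the team, while the internal helper trusts its caller and merely documents the assumption with an assert.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Internal helper: trusts its caller (same team/component), and only
// documents the assumption with an assert that disappears under NDEBUG.
static int parse_port_internal(const std::string& s) {
    assert(!s.empty() && "caller guarantees non-empty numeric input");
    return std::stoi(s);
}

// Public boundary: input arrives from end users or another team, so check it
// and report violations in a way that pinpoints the component at fault.
int parse_port(const std::string& s) {
    if (s.empty() || s.find_first_not_of("0123456789") != std::string::npos)
        throw std::invalid_argument("port must be a decimal number");
    int port = parse_port_internal(s);
    if (port < 1 || port > 65535)
        throw std::out_of_range("port must be in 1..65535");
    return port;
}
```

When a bug report comes in, an exception thrown at the boundary immediately tells you which side of the interface supplied the bad data.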
if a function is delegating to another function, the first function should check them but checking them again in the second one is redundant.
What if you change the way those functions call each other? Or you introduce new validation requirements in the second function, that the first one delegates to? I'd say it's safer to always check them.
I've made the habit of distinguishing between checking and asserting the preconditions, depending (as people pointed out in the comments) on whether a call comes from the outside (unchecked exception, may happen) or the inside (assert, shouldn't happen).
Ideally, the production system won't have the penalty of the asserts, and you could even use a Design By Contract(TM)-like mechanism to discharge the assertions statically.
In my experience, it depends on your encapsulation.
If the inner function is private then you can be pretty sure that its preconditions are set.
Guess it's all about the Law of Demeter, talk only to your friends and so on.
As a basis for best practice: if the call is public, you should check your inputs.
I think the best practice is to do these checks only if they are going to fail some day. It will help when you do the following.
Debugging
There's no point checking them when several private functions in one module, which has a single maintainer, exchange data. Of course there are exceptions, especially if your language doesn't have a static type system, or your data are "stringly typed".
However, if you expose a public API, one day someone will fail your precondition. The further the person maintaining the calling module is from you (in organizational structure and in physical location), the more likely it is to happen. And when it happens, a clear statement of the precondition failure, with a note of where it happened, may save hours of debugging. The Law of Leaky Abstractions is still true...
QA
Precondition failure helps QA to debug their tests. If a unit-test for a module causes the module to yield precondition failure, it means that the test is incorrect, not your code. (Or, that your precondition check is incorrect, but that's less likely).
If one of the means of performing QA is static analysis, then precondition checks will also help if they have a specific notation (for example, only these checks use an assert_precondition macro). In static analysis it's very important to distinguish incorrect input from source-code errors.
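Such a dedicated notation might look like the following sketch; the assert_precondition name comes from the answer above, but the implementation and the example function are assumed.

```cpp
#include <cstdio>
#include <cstdlib>

// A distinct macro so static analysis (and human readers) can tell
// precondition failures apart from ordinary internal assertions.
#define assert_precondition(cond)                                         \
    do {                                                                  \
        if (!(cond)) {                                                    \
            std::fprintf(stderr, "precondition failed: %s (%s:%d)\n",     \
                         #cond, __FILE__, __LINE__);                      \
            std::abort();                                                 \
        }                                                                 \
    } while (0)

// Hypothetical example: integer square root, defined only for x >= 0.
int checked_sqrt_floor(int x) {
    assert_precondition(x >= 0);
    int r = 0;
    while ((r + 1) * (r + 1) <= x) ++r;
    return r;
}
```

A failure prints the violated condition and its location, which is exactly the "clear statement of precondition failure" the answer asks for.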
Documentation
If you don't have much time to write documentation, you can make your code aid the text that accompanies it. Clear and visible precondition checks, perceived as separate from the rest of the implementation, "document" the possible inputs to some extent. (Another way to document your code like this is to write unit tests.)
As with everything, evaluate your requirements to find the best solution for each situation.
When preconditions are easier to check ("pointer isn't null"), you might as well do that often. Preconditions which are hard to check ("points to a valid null-terminated string") or are expensive in time, memory, or other resources may be handled a different way. Use the Pareto principle and gather the low-hanging fruit.
// C, C++:
#include <assert.h>

void example(char const* s) {
    // precondition: s points to a valid null-terminated string
    assert(s); // tests that s is non-null, which is necessary (but not
               // sufficient) for it to point to a valid null-terminated
               // string; the full test is nearly impossible from within
               // this function
}
Guaranteeing preconditions is the responsibility of the caller. Because of this, several languages offer an "assert" construct that can optionally be skipped (e.g. defining NDEBUG for C/C++, command-line switch for Python) so that you can more extensively test preconditions in special builds without impacting final performance. However, how to use assert can be a heated debate—again, figure out your requirements and be consistent.
This is a somewhat old question, but no, preconditions do not have to be checked every time. It really depends.
For example, take binary search over a vector. The precondition is a sorted vector. Now, if you check on every call whether the vector is sorted, that takes linear time per call, so it is not efficient. The client must be aware of the precondition and make sure it is met.
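That trade-off can be sketched in C++ (using the standard library's binary_search): the full precondition check is linear and would dwarf the logarithmic search, so it is relegated to an assert that vanishes in builds defining NDEBUG.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Precondition: v is sorted ascending. Verifying that costs O(n), which
// would dwarf the O(log n) search, so the check is an assert: it runs in
// debug builds and is compiled out when NDEBUG is defined for release.
bool sorted_contains(const std::vector<int>& v, int key) {
    assert(std::is_sorted(v.begin(), v.end()));
    return std::binary_search(v.begin(), v.end(), key);
}
```

In release builds the client carries the responsibility; in debug builds a violated precondition fails loudly at the call site.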
The best practice is to always check them.

What's a good approach to writing error handling?

I hate writing error condition code. I guess I don't have a good approach to doing it:
Do you write all of your 'functional' code first, then go back in and add error handling, or vice versa?
How stupid do you assume your users are?
How granular do you make your exception throws?
Does anyone have sage advice to pass on to make this easier?
A lot of great answers guys, thank you. I had actually gotten more answers about dealing with the user than I thought. I'm actually more interested in error handling on the back end, dealing with database connection failures and potential effects on the front end, etc. Keep them coming!
I can answer one question: you don't need to assume your users are "stupid"; you need to help them use your application. Show clear prompts, validate data and explain why, so it's obvious to them; don't crash in their face if you can't handle what they've done (or, more specifically, what you've let them do); show a helpful page explaining what they can do instead; and so on.
Treat them with respect, and don't assume they know everything about your system, you are here to help them.
In respect to the first part; I generally write most error-handling at the time, and add a little bit back in later.
I generally don't throw that many exceptions.
Assume your users don't know anything and will break your system any way that it can possibly be broken.
Then write your error handling code accordingly.
First, and foremost, be clear to the user on what you expect. Second, test the input to verify it contains data within the boundaries you expect.
A prime example: I had a form with an email field. We weren't immediately using that data, so we didn't put any checking on it. Result: about 1% of the users put in their home address. The field was labeled "Email Address". Apparently the users were just reading the second word and ignoring the first.
The fix was to change the label to simply say "Email" and then test the input. For kicks we went ahead and logged what the users were initially typing in that field just to see if the label change helped. It did.
Also, as a general practice, your functions should test the inputs to verify they contain the data you expect. Use asserts or their equivalent in your language of choice.
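As a sketch of such an input test (a plausibility check only, not full RFC-grade email validation; the function name is made up), something like this would have rejected those home addresses:

```cpp
#include <string>

// A plausibility check, not RFC 5322 validation: exactly one '@',
// something before it, and a '.' somewhere after it. A street address
// such as "123 Main Street" fails immediately.
bool looks_like_email(const std::string& s) {
    std::string::size_type at = s.find('@');
    return at != std::string::npos && at > 0 &&
           s.find('@', at + 1) == std::string::npos &&
           s.find('.', at + 1) != std::string::npos;
}
```

The point is not to validate perfectly but to catch input that clearly isn't what the field asked for, and to tell the user why.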
When I code, there will be some exceptions which I expect, e.g. a file may be missing, or some XML serialisation may fail. Those exceptions I know will happen ahead of time, and I can put in handling for them.
There is a lot you cannot anticipate, though, and nor should you try to. Put in a global error handler and logger, so that ultimately everything gets caught and logged. Then, as your testers and/or users find situations that cause exceptions (e.g. bad input), you can decide whether you want to put in further handling specifically for them, or leave things as they were.
Summary: validate your input, but don't try to gaze into the crystal ball too much, as you will never anticipate every issue that the user may come up with. Have a global handler and logger, and then refine as necessary.
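The global-handler-plus-logger idea can be sketched like this (run_app, the wrapper, and the log file name are hypothetical stand-ins for real application code):

```cpp
#include <exception>
#include <fstream>
#include <iostream>
#include <stdexcept>
#include <string>

// Stand-in for the real application logic (hypothetical): bad input throws.
int run_app(const std::string& input) {
    if (input.empty()) throw std::runtime_error("empty input");
    return 0;
}

// Last-resort handler: anticipated errors are handled where they occur;
// anything unanticipated is caught here, logged, and turned into a
// graceful failure instead of a crash in the user's face.
int guarded_main(const std::string& input) {
    try {
        return run_app(input);
    } catch (const std::exception& e) {
        std::ofstream log("app.log", std::ios::app);
        log << "unhandled: " << e.what() << '\n';  // record details for triage
        std::cerr << "Something went wrong; the details have been logged.\n";
        return 1;
    }
}
```

As specific failure patterns show up in the log, you can promote them from the catch-all to dedicated handling.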
I have two words for you: Defensive Coding
You have to assume your users are incredibly stupid. Someone will always find a way to give you input that you thought would never happen.
I try to make my exception throws as granular as possible to provide the best feedback when something goes wrong. If you lump everything together, you can't tell which error case caused the problem.
I usually try to handle error cases first (before getting functional code), but that's not necessarily a best practice.
Someone has already mentioned defensive programming. A few thoughts from a user experience perspective, though.
If the user's input is invalid, either (a) correct it if you're reasonably sure you knew what they meant or (b) display a message in line that tells them what corrective action they should take.
Avoid messages like, "FATAL SYSTEM ERROR CODE 02382981." Users (a) don't know what this means, even if you do, and (b) are intimidated and put off by seeing things like this.
Avoid pop-up messages for every—freaking—possible—error you can come up with. You shouldn't disrupt user flow unless you absolutely need them to resolve a problem before they can do anything else.
Log, log, log. When you show an error message to the user, put relevant information that might help you debug in either (a) a log file or (b) a database, depending on the type of application you're creating. This will ease the effort of hunting down information about the error without making the user cry.
Once you identify what your users should and should not be able to do, you'll be able to effectively write error handling code. You can make this easier on yourself with helper methods/classes.
In terms of your question about writing handling before/after/during business logic, think about it this way: if you're making 400,000 sandwiches, it might be faster to add all the mustard at the same time, but it's probably also a lot more boring than making each sandwich individually. Who knows, though, maybe you really like the smell of mustard...

Misunderstood Good Ideas

Sometimes someone has a great idea that solves a problem. But as time passes, people forget why it was a great idea and try to use it in ways that end up causing problems as bad (or worse) than what the idea was originally supposed to solve.
Example:
I'm sure that distributed source control is sufficiently counterintuitive that people try to establish conventions that defeat the point of distributed source control.
Example 2:
it's very natural to think that when you're writing some code, you should handle all errors that could possibly arise. But a function doesn't always have enough information to handle the error properly, so all it can do is somehow tell whoever called it that the error occurred. Passing errors up the call stack by hand is tedious, so exceptions were invented. With no extra typing on the part of the programmer, exceptions will bubble up the call stack until somebody can do something with them. It seems like checked exceptions, at least in practice, tarnish the awesomeness of exceptions. At best, the programmer has to tediously work her way up any possible call stack, specifying that every method throws a given exception up to the point where it can be handled (if it can be handled). Worse, she might swallow the exception to avoid the chore!
What are some other examples where an approach that seems like the common-sense thing to do is actually recreating a problem that had been solved in some way?
Point of this question: internalizing what is wrong with the common-sense "obvious" solution is a very good way of developing an intuition for how and why to use the initially counterintuitive elegant solution.
Hmmm..... let me think... how about the foundation on which the web works: stateless HTTP, on top of which many stateful frameworks have been built (ASP.NET, JSF etc.) that completely discard the stateless nature of the protocol? Well, not discard it in their implementation, but discard it for their users: developers who, not knowing anything about basic web mechanics, try to pack megabytes of serialized objects into pages, which leads to performance loss and tremendous consumption of traffic and server resources.
Would that fall within your definition of the concept?
You're right that there are many examples of this with DVCS. The most common one is using a DVCS like Subversion: always pushing on every commit, or going days without even bothering to commit.

What are the best practices for Design by Contract programming

What are the best practices for Design by Contract programming?
At college I learned the Design by Contract paradigm (in an OO environment).
We were taught three ways to tackle the problem:
1) Total Programming: covers all possible exceptional cases in its effect (cf. Math)
2) Nominal Programming: only 'promises' the right effects when the preconditions are met (otherwise the effect is undefined)
3) Defensive Programming: uses exceptions to signal illegal invocations of methods
Now, we focused on the correct use in each situation across different OO scenarios, but we haven't learned WHEN to use WHICH...
(Mostly the tactic was enforced by the exercise.)
Now I find it very strange that I never asked my teacher (but then again, during lectures, no one did).
Personally, I never use nominal now, and I tend to replace preconditions with exceptions (so I would rather use 'throws IllegalDivisionByZero' than state 'precondition: divisor should differ from zero'), and I only program totally where it makes sense (so I wouldn't return a conventional value on division by zero), but this approach is just based on personal findings and preferences.
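The three styles can be made concrete with the division example in a C++ sketch (the function names are hypothetical; std::optional stands in for a "defined result for every input"):

```cpp
#include <optional>
#include <stdexcept>

// 1) Total: every input has a defined effect; failure is part of the
//    result type rather than being undefined or thrown.
std::optional<double> divide_total(double a, double b) {
    if (b == 0.0) return std::nullopt;   // a defined result for the "bad" case
    return a / b;
}

// 2) Nominal: behaviour is promised only when the precondition (b != 0)
//    holds; otherwise the effect is undefined, as in the course's definition.
double divide_nominal(double a, double b) {
    return a / b;                        // the caller must ensure b != 0
}

// 3) Defensive: an illegal invocation is signalled with an exception,
//    matching the "throws IllegalDivisionByZero" preference above.
double divide_defensive(double a, double b) {
    if (b == 0.0) throw std::domain_error("division by zero");
    return a / b;
}
```

Each version assigns the zero-divisor responsibility differently: to the result type, to the caller, or to a runtime check.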
So I am asking you:
Are there any best practices?
I didn't know about this division, and it doesn't really reflect my experience.
Total Programming is virtually impossible. You cannot guarantee that you cover all exceptional cases. So basically you should limit your scope and reject the situations that are out of scope (that is the role of the preconditions).
Nominal Programming is not desired. Undefined effect should be banned.
Defensive Programming is a must. You should always signal illegal invocations of methods.
I'm in favour of implementing the complete set of Design-by-Contract elements, which is, in my opinion, a practical and affordable version of Total Programming:
Preconditions (a kind of Defensive Programming) to signal illegal invocation of the method. Try to limit your scope as much as you can so that you can simplify the code. Avoid complex implementation if possible by narrowing the scope a little.
Postconditions to raise an error if the desired effect is not obtained. Even if it is your fault, you should notify the caller that you missed your goal.
Invariants to check that the object consistency is preserved.
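The three contract elements above can be sketched on a hypothetical account class (an assumption for illustration): exceptions for preconditions, and asserts, removable via NDEBUG, for the postcondition and the invariant.

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical example class showing the three contract elements.
class Account {
    long balance_;  // invariant: balance_ >= 0
    void check_invariant() const { assert(balance_ >= 0); }
public:
    explicit Account(long opening) : balance_(opening) { check_invariant(); }

    void withdraw(long amount) {
        if (amount < 0 || amount > balance_)      // precondition: legal request
            throw std::invalid_argument("bad withdrawal amount");
        long before = balance_;
        balance_ -= amount;
        assert(balance_ == before - amount);      // postcondition: desired effect
        check_invariant();                        // object consistency preserved
    }

    long balance() const { return balance_; }
};
```

The precondition stays in release builds because the caller may be wrong; the postcondition and invariant guard against the implementer's own mistakes and can be compiled out.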
It all boils down to what responsibilities do you wish to assign to the client and the implementer of the contract.
In defensive programming you force the implementer to check for error conditions, which can be costly or even impossible in some cases. Imagine a contract like the one specified by binarySearch, for example: your input array has to be sorted. You can't detect this while running the algorithm; you have to do a separate check for it, which would actually bump the execution time up by an order of magnitude. To back my opinion up, see the signature of the method in the javadocs.
Another point is that people and frameworks now tend to implement exception-translation mechanisms, used mainly to translate checked exceptions (defensive style) into runtime exceptions that will just pop up if something goes wrong. This way the client and the implementer of the contract have less to worry about when dealing with each other.
Again this is my personal opinion backed only with what limited experience I have, I'd love to hear more about this subject.
...but we haven't learned WHEN to use WHICH...
I think the best practice is to be "as defensive as possible". Do your runtime checks if you can. As @MahdeTo has mentioned, sometimes that's impossible for performance reasons; in such cases, fall back on undefined or unsatisfactory behavior.
That said, be explicit in your documentation as to what has runtime checks and what does not.
Like much of computing "it depends" is probably the best answer.
Design by contract/programming by contract can help development greatly by explicitly documenting the conditions for a function. Just the documentation can be a help without even making it into (compiled) code.
Where feasible I recommend defensive - checking every condition. BUT only for development and debug builds. In this way most invalid assumptions are caught when the conditions are broken. A good build system would allow you to turn the different condition types on and off at a module or file level - as well as globally.
The actions taken in release versions of the software then depend upon the system and how the condition is triggered (the usual distinction between external and internal interfaces). The release version could be 'total programming': all conditions give a defined result (which can include errors or NaN).
To me, "nominal programming" is a dead end in the real world. You assume that if you passed the right values in (which of course you did), then the value you receive is good. If your assumption was wrong, you break down.
I think that test-driven programming is the answer. Before actually implementing the module, you first create a unit test (call it a contract). Then you gradually implement the functionality and make sure the contract still holds as you go. Usually I start with plain stubs and mock-ups, then gradually fill out the rest, replacing the stubs with real implementations. Keep improving and making the test stronger. At the end you have a robust implementation of the module, plus a fantastic test bed: a coded implementation of the contract. Later on, if someone modifies the module, you first see whether it still fits the test bed. If it doesn't, the contract is broken: reject the changes. Or the contract is outdated: fix the unit tests. And so on... the boring cycle of software development :)