Sometimes while writing Java in Eclipse, I write code that generates warnings. A common one is this, which I get when extending the Exception class:
public class NumberDivideException extends Exception {
    public NumberDivideException() {
        super("Illegal complex number operation!");
    }
    public NumberDivideException(String s) {
        super(s);
    }
} // end NumberDivideException
The warning:
The serializable class NumberDivideException does not declare a static final serialVersionUID field of type long.
I know this warning is caused by my failure to... well, it says right above. I could solve it by including the serialVersionUID, but this is a tiny one-hour assignment for school; I don't plan on serializing it anytime soon...
The other option, of course, is to let Eclipse add @SuppressWarnings("serial").
But every time my mouse hovers over the Suppress option, I feel a little guilty.
For programming in general, is it a good habit to suppress warnings?
(Also, as a side question, is adding a "generated" serialVersionUID like serialVersionUID = -1049317663306637382L; the proper way to add a serialVersionUID, or do I have to determine the number some other way?)
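For reference, a minimal sketch of the non-suppressed alternative, using that generated value (any long literal satisfies the compiler; a generated one just reflects the current shape of the class):

public class NumberDivideException extends Exception {
    // Any long value works; 1L is common. A generated value like this one
    // matches what the serialization runtime would compute by default.
    private static final long serialVersionUID = -1049317663306637382L;

    public NumberDivideException() {
        super("Illegal complex number operation!");
    }
}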
EDIT: Upon seeing answers, it appears my question may be slightly argumentative... Sorry!
Too late for me to delete though...
In the particular case of Eclipse, rather than suppress warnings as they happen, I prefer setting up Eclipse to emit warnings I care about and automatically ignore all instances of the ones I don't. See Window -> Preferences -> Java -> Compiler -> Errors/Warnings.
That's rather specific to Java, though, and I find that Java tends to have many more warnings I don't care about than most other languages. In other languages, I usually have all warnings on and fix them as they come up.
It's very satisfying to have code compile without warnings - and when you do get one, it stands out and may alert you to a problem in the code.
You should suppress warnings ONLY if you are absolutely sure of what you are doing, and it would be better to have this kind of thing documented for future alterations (Javadoc comments, for example).
This way you hide the warnings you know won't cause problems and can focus on the ones that will.
If it's a program you're ever going to look at again, it will pay off to do things 'the right way' -- and that means not suppressing warnings, but fixing them.
For school assignments however, you are often asked to reinvent the wheel/complete trivial tasks, and so you shouldn't feel any qualms about "hacking" stuff together. Unless of course you are graded on coding style...
You should never suppress warnings globally, not even just one particular kind of warning. You should also set your compiler to be as picky as possible. Warnings are there to tell you about problems that may exist. You can either refactor your code to get rid of them or, in some languages, add a directive to ignore a specific bit of code that causes a specific warning. This lets you review a warning and ignore it if you know it is OK.
Wherever practical, suppress warnings only on a particular line, or - at most - a particular file.
Not only does this ensure that warnings that you WEREN'T expecting get through to you, but it serves as an explicit design note - pointing out to the next person editing the code (or yourself in a month's time) that you were aware of the issue, and took a deliberate decision that it was acceptable for this code. That saves the next person from having to evaluate the situation again.
(I don't know enough about Eclipse to make this happen - this is a general principle that applies across languages.)
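A minimal Java sketch of that principle, using the class from the question - the suppression is scoped to a single declaration, and the comment records the decision for the next reader:

@SuppressWarnings("serial") // deliberate: this exception is never serialized
public class NumberDivideException extends Exception {
    public NumberDivideException(String s) {
        super(s);
    }
}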
I derived several classes from various exceptions. Now VS gives warning as in the title of this question.
Could someone explain what implications of suppressing this rule?
Could you explain rule from here saying "Do not suppress a warning from this rule for exception classes because they must be serializable to work correctly across application domains."?
P.S. Well, I've got an answer myself. You indeed have to mark exceptions as serializable. They work fine without this attribute in the same AppDomain. However, if you try to catch one from some other domain, it will have to get serialized in order to cross app boundaries. And that is the main reason I found for this.
This is not exactly a Visual Studio warning, it is a warning produced by the FxCop tool. Which you can run from the VS Analyze menu. FxCop is a static analyzer that looks for common gotchas in a .NET program that a compiler won't flag. Most of its warnings are pretty obscure and are rarely really serious problems, you need to treat it as a "have you thought of this?" kind of tool.
The little factoid it is trying to remind you about here is that the Exception class implements ISerializable and has the [Serializable] attribute. Which is a pretty hard requirement, it makes the base Exception object serializable across app-domains. Necessary because Exception doesn't derive from MarshalByRefObject. And necessary to allow code that you run in another app domain to throw exceptions that you can catch.
So FxCop notes that you didn't do the same for your own Exception-derived class. Which is really only a problem if you ever intend to have code that throws your exception run in another app domain. FxCop isn't smart enough to know whether you will, so it can only remind you that it will go wrong when you do. It is pretty uncommon, so feel free to ignore the warning when you just don't know yet whether you will or not, or if it all sounds like Chinese to you.
If you're not going to use multiple AppDomains in your application, I think you can ignore or suppress it.
At my previous job, providing all methods with javadoc was mandatory, which resulted in things like:
/**
 * Sets the Frobber.
 *
 * @param frobber The frobber
 */
public void setFrobber(Frobber frobber) { ... }
As you can see, the documentation takes up space and work, but adds little to the code.
Should documenting all methods be mandatory or optional? Is there a rule for which methods to document? What are pros and cons of requiring every method to be documented?
"providing all methods with javadoc was mandatory"
I strongly suspect that documenting all methods was mandatory, but providing javadoc comments was all that could be automatically enforced and hence all that was uniformly done.
Personally I think it's better to have no javadoc than completely useless javadoc - at least you can see from a glance at the HTML which methods are undocumented, because there are no descriptions of the parameters etc.
Documentation is frequently underrated, because it always seems less important and urgent when you're writing the code, than it does when you're using it later. But the style and form of documentation is often overrated - auto-generated XML nonsense is still nonsense. Given the choice, I'd rather have the code comment // Sets this object to use the specified frobber for all future frobbing, than your zero-information javadoc.
For all I know from your docs, the function doesn't actually modify this object at all, it might call the set() function on frobber, or it might be while(!frobber.isset()) { refrigerator.add(frobber); sleep(3600); refrigerator.remove(frobber); } Hence it "sets the frobber". I'm sure I read somewhere that "set" is the word with the most distinct definitions in the OED. Brief descriptions are ambiguous and hence misleading, and the purpose of documentation is to stop people relying on your source, and hence on details of your current implementation. My comment doesn't really take any longer to write than it took to add "Sets the frobber" and "the frobber" to the IDE-generated javadoc stub. It doesn't explain what frobbing is or when this object does it (hopefully that's elsewhere in the class docs) but at least it tries to tell you what the function does.
As for when to mandate documentation - I think every interface must be documented. If you're not defining Java interfaces, the "interface" is every public and protected method, and every package-protected method unless the package is tiny. Implementation doesn't have to be documented, although it should be commented if the way it works is non-obvious. Documentation might be as simple as the sentence in my comment above - you don't necessarily need a separate sentence for each parameter if the method description already says what they are.
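For example, a hedged sketch of javadoc that documents the interface rather than restating the name (the frobbing semantics are assumed from the comment above, not from any real API):

/**
 * Sets this object to use the specified frobber for all future frobbing.
 * The frobber itself is not modified by this call.
 *
 * @param frobber the frobber to use from now on
 */
public void setFrobber(Frobber frobber) {
    this.frobber = frobber;
}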
If you have code review, then IMO the answer is to review comments and documentation at the same time. If you don't have code review, then you need a cone of shame for whichever developer most recently forced someone else to come over and ask what the code actually does.
The same applies to anyone who relied on undocumented behaviour of a function, with the result that an implementation change that didn't change the interface breaks their code. The way you enforce that code be documented is to complain that you can't call it until you know what it guarantees to do. Arbitrary rules like "javadoc comments must exist" become less important, at least for functions that other developers need to call.
For big projects, frameworks/libraries, or even open-source projects that you are creating, it is mandatory. For small personal or private projects, it is optional. Having said that, it is always a good idea to document your code, so that if you come back to your project after a year, whether small or big, you know what it was doing. This really helps greatly.
You should always document your code, especially if someone else works or will work on it. Maybe you haven't yet had the chance to work on undocumented legacy code, but it can be a real pain!
About the comment itself, one thing to avoid is writing a comment just because it is mandatory. Think for a few seconds and you'll find something to tell about your method: something that's not already in the method name, something that might not be obvious to the next developer. Explain what your method does, what the corner cases are, what it expects as input.
And remember:
Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.
It applies to comments too :)
It's much easier to maintain "self-documenting" code. If you choose good function and variable names and keep functions short (e.g. fewer than 10 lines, with only a single idea per function), this will help keep the purpose of the code clear. And you won't have to try to keep the comments up to date - the only thing worse than no comments is comments that are wrong!
There's a good and recent summary of various points of view at InfoQ.
Documentation of code is very important. But Javadoc (or similar tools) is neither the only method for this nor the best one. The biggest downside is that Javadoc documentation must be kept up to date. If the method is changed but the description stays the same, this documentation can do more harm than good.
To avoid the problem of documentation getting out of sync with the code, use code to document. Unit tests show how your code is used, and asserts in the code can ensure that parameters and return values are validated. In one project I added asserts to a calculation stating that the probabilities in the calculation are always between 0 and 1. Later one of these asserts triggered in a use case and pointed me directly to a bug.
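A minimal runnable Java sketch of that idea; computeProbability is a hypothetical stand-in for the calculation described:

public class ProbabilityCheck {
    // Hypothetical stand-in for the real calculation.
    static double computeProbability(double x) {
        return x / (1.0 + x); // stays within [0, 1) for non-negative x
    }

    public static void main(String[] args) {
        double p = computeProbability(3.0);
        // The assert documents the invariant and enforces it in test runs
        // (enable with java -ea).
        assert p >= 0.0 && p <= 1.0 : "probability out of range: " + p;
        System.out.println("p = " + p);
    }
}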
The most important documentation is good naming. If you set a Frobber, then setFrobber is a good name. The Javadoc given in your example adds nothing to this naming. frobIt would be a less good name, and method3 would be very bad. Code reviews should help establish good naming.
Javadocs and other documentation should be added if the other methods aren't sufficient. But in this case you need to take care that this documentation is always kept up to date.
Q: Should documenting all methods be mandatory or optional?
A: Mandatory.
Q: Is there a rule for which methods to document?
A: All of them.
Q: What are pros and cons of requiring every method to be documented?
A: Pros: Smart people can spend time focusing on code writing, not code figuring-out. Code is well explained. Code can be passed to newbies. Cons: Whining. Stale comments.
A focus on quality commenting obviates the 'code is self-documenting' issues.
In the case of getters and setters, not every get and set is trivial. Sometimes it is, that's great. When it isn't, the comment should note the information. It's better to be conservative and always have comments than unconservative and have to scrap code and waste time figuring it out.
Final example: The Carmack Inverse Square Root code. Self-documenting, eh?
Imagine I have a function with a bug in it:
Pseudo-code:
void Foo(LPVOID o)
{
    //implementation details omitted
}
The problem is the user passed null:
Object bar = null;
...
Foo(bar);
Then the function might crash due to an access violation; but it could also happen to work fine. The bug is that the function should have been checking for the invalid case of passing null, but it just never did. It was never an issue because developers were trusted to know what they were doing.
If I now change the function to:
Pseudo-code:
void Foo(LPVOID o)
{
    if (o == null) throw new EArgumentNullException("o");
    //implementation details omitted
}
then people who were happily using the function, and happened not to get an access violation, will now suddenly begin seeing an EArgumentNullException.
Do I continue to let people use the function improperly, and create a new version of the function? Or do I fix the function to include what it should have had originally?
So now the moral dilemma: do you ever add new sanity checks, safety checks, or assertions to existing code? Or do you declare the old function abandoned and write a new one?
Consider a bug so common that Microsoft had to fix it for developers:
MessageBox(GetDesktopWindow, ...);
You never, ever, ever want to make a window modal against the desktop. You'll lock up the system. Do you continue to let developers lock up the user's computer? Or do you change the function to:
MessageBox(HWND hWndParent, ...)
{
    if (hWndParent == GetDesktopWindow)
        throw new Exception("hWndParent cannot be the desktop window. Use NULL instead.");
    ...
}
In reality Microsoft changed the Window Manager to auto-fix the bad parameter:
MessageBox(HWND hWndParent, ...)
{
    if (hWndParent == GetDesktopWindow)
        hWndParent = 0;
    ...
}
In my made-up example there is no way to patch the function - if I wasn't given an object, I can't do what I need to do with it.
Do you risk breaking existing code by adding parameter validation? Do you let existing code continue to be wrong, getting incorrect results?
The problem is that not only are you fixing a bug, but you are changing the semantic signature of the method by introducing an error case.
From a software engineering perspective I would advocate that you try to specify methods as best as possible (for instance using pre and post-conditions) but once the method is out there, specification changes are a no-go (or at least you would have to check all occurrences of the method) and a new method would be better.
I'd keep the old function and simply let it emit a warning that notifies you of every (possibly) wrong use, and then I'd just kick the developer who used it wrong until he uses it properly.
You cannot catch everything. What if someone wrote MakeLocation("Ian Boyd", "is stupid");? Would you create a new function, or change the function to catch that? No, you would fire the developer (or at least punish him).
Of course this requires that you document what your function requires as input.
This is where having automated tests [unit testing, integration testing, automated functional testing] is great: they give you the power to change existing code with confidence.
When making changes like this, I would suggest finding all usages and ensuring they behave how you believe they should.
I myself would make bug fixes to the existing function rather than duplicate it 99% of the time. If the change alters behaviour a lot and there are a lot of calls to the function, you need to be very sure of your change.
So go ahead, make your change, run your unit tests, then your automated functional tests. Fix any errors and you're golden!
If your code has a bug in it, you should do what you normally do when any bug is reported. One part of that is assessing the impact of fixing it and of not fixing it. Sometimes the right thing to do with a bug is not to fix it, because the behaviour it exposes has become accepted. Sometimes the cost of fixing it, or the inconvenience of releasing a fix outside the normal release cycle, stops you from releasing the fix for a while. This isn't a moral dilemma; it's an economic question of costs and benefits. If you are disturbed at the thought of having known bugs in your published code, publish a known-bugs list.
One option none of the other respondents seems to have suggested is to wrap the buggy function in another function which imposes the new behaviour that you require. In a world where functions can run to many lines, it is sometimes less likely to introduce new bugs if you preserve a 99%-correct piece of code and address the change without modifying the existing code. Of course, this is not always possible.
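A minimal Java sketch of that wrapping idea (the names are illustrative; legacyFoo stands in for the buggy function):

public class FooWrapper {
    // The original, unmodified function; its body is untouched.
    static void legacyFoo(Object o) {
        // ... implementation details omitted ...
    }

    // New entry point that imposes the required behaviour without
    // modifying the 99%-correct original.
    static void checkedFoo(Object o) {
        if (o == null) {
            throw new IllegalArgumentException("o must not be null");
        }
        legacyFoo(o);
    }
}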
Two choices:
Give the error-checking version a new name and deprecate the old version: one version later, have the old one start issuing warnings (compile time if possible, run time if necessary); two versions later, remove it. (See the sketch after this list.)
[not always possible] Place the newly introduced error check in such a way that it only triggers if the unmodified version would crash or produce undefined behavior. (So that users who were taking care in their code don't get any unpleasant surprises.)
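A minimal Java sketch of the first choice (names illustrative; @Deprecated provides the compile-time warning mentioned):

public class Api {
    /** @deprecated Use {@link #fooChecked(Object)}; scheduled for removal. */
    @Deprecated
    public static void foo(Object o) {
        fooChecked(o); // old name now delegates to the checked version
    }

    public static void fooChecked(Object o) {
        if (o == null) {
            throw new IllegalArgumentException("o must not be null");
        }
        // ... original implementation ...
    }
}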
It entirely depends on you, your codebase, and your users.
If you are Microsoft and you have a bug in your API that is used by millions of devs around the world, then you will probably want to just create a new function and update the docs on the old one. If you can, you would also want to update the compiler to give warnings as well. (Though even then you may be able to change the existing system; remember when MS moved VC to the C++ standard and you had to update all of your #include <iostream> lines and add using namespace std to get simple, existing console apps working again?)
It basically depends on what the function is. If it is something basic that will have massive ripple effects, then fixing it could break a lot of code. If it is just an ancillary function, then you may as well fix it. Of course, if you are Microsoft and your other code depends on a bug in one of your functions, then you probably should fix it, since that is just plain embarrassing to keep. If other devs rely on the bug (that you created), then you may have an obligation to the users not to break the code that you caused to be buggy.
If you are a small company or an independent developer, then sure, go ahead and fix the function. If you only need to update yourself or a few people on the new usage, then fixing it is the best solution, especially since it is not even a big deal: all it really requires is an added note in the docs for the function, e.g. "do not pass NULL" or "an exception is thrown if hWnd is the desktop", etc.
Another option, as a sort of compromise, would be to create a wrapper function. You could create a small, inline function that checks the args and then calls the existing function. That way you don't really have to do much in the short term, and eventually, when people have moved to the new one, you can deprecate or even remove the old one, moving the code into the new one between the checks.
In most scenarios, it is better to fix a buggy function, particularly if you are merely adding argument checks as opposed to completely changing the behavior of the function. It is not really a good idea to facilitate - read encourage - bad coding just because it would break some existing code (especially if the code is free!). Think about it: if someone is creating a new program, then they can do it right from the start instead of relying on a bug. If they are re-compiling an old program that depends on the bug, then they can just update the code. Again, it depends on how messy and convoluted the code is, how many people are affected, and whether or not they pay you, but it is quite common to have to update old code to, for example, initialize variables that hadn't been, or check for error codes, etc.
To sum up, in your specific example (given the information provided), you should just fix it.
This one has been puzzling me for some time now.
Let's imagine a class which represents a resource, and in order to be able to use this resource one needs to first call the 'Open' method on it, or an InvalidOperationException will be thrown.
Should my code also check whether someone tries to open an already open resource, or close an already closed one?
Should code prevent a logically invalid invocation even when no harm would be done?
I think that programming this way would help writing better code at the other side, but I feel that I might be taking too much responsibility and affect reusability.
What do you guys think?
Edit:
I don't think this could be called defensive programming, because it won't let a possible bad use slip by either; another InvalidOperationException will be thrown.
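To make the setup concrete, a minimal Java sketch of such a resource class - the marked checks are the ones being debated, with Java's IllegalStateException standing in for InvalidOperationException:

public class Resource {
    private boolean open = false;

    public void open() {
        if (open) {
            throw new IllegalStateException("already open"); // debated check
        }
        open = true;
    }

    public void use() {
        if (!open) {
            throw new IllegalStateException("call open() first"); // agreed check
        }
        // ... work with the resource ...
    }

    public void close() {
        if (!open) {
            throw new IllegalStateException("already closed"); // debated check
        }
        open = false;
    }
}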
This is called defensive programming. It's a good programming practice because you ensure that your application doesn't crash on misbehaviour.
Requiring that some method be called first before another method is called is not a good programming practice, though. It adds a lot of complexity, which is better handled by the class itself.
This is called sequential coupling. The Wikipedia article says that whether it's a bad practice depends on the context, but the code shouldn't crash when handled improperly. Sometimes it's necessary to throw an exception to make things clear.
This really depends on what the class actually does. In some cases failing silently is a good idea (eg, you want your DVD player to continue working, not show an error message if it opens the DVD tray that is already open) and in other cases you want as much information as possible (eg, if an airplane tries to close a door that is reportedly already closed, then something is wrong and the pilot should be alerted).
In most cases throwing an error when a logically invalid action is performed is useful for developers, so implementing those exceptions depends on who will use the code. If it is used internally for one application, then it's not vital. But if it is used by many different projects or developers, then I would look into it.
If your example is really the case, then the Open functionality should probably be invoked by the class's constructor.
If you consider the C++ iostream library (which is very widely used and considered quite a good example) you can call any operation on a stream class, whether it is open or not. The called function will simply return a failure indicator of some sort if the operation could not be performed. The functions must of course test the stream state in order to do this.
What you must not do is allow your programs to silently accept any old input as parameters. For example, this would be a broken implementation of strlen():
int strlen( const char * s )
{
    if ( s == 0 )
    {
        return 0; // bad - silently accepts invalid input
    }
    else
    {
        // walk to the terminating NUL to compute the length
        int length = 0;
        while ( s[length] != '\0' )
        {
            ++length;
        }
        return length;
    }
}
as it fields bad inputs without causing a fuss - it should instead throw an exception or use an assert(), depending on your exact development philosophy.
There's no substitute for taste, talent and experience in figuring out exactly how many safety checks should be in your code for best cost/benefit ratio for your organization.
A good quality API is expected to be fool-proof and to guide the user with the proper amount of warnings.
Sometimes, safety precautions may impair performance. Performance is one of the most counter-intuitive things in programming. Optimize with care, only when performance really matters.
If this is part of a public SDK that you're releasing to the wild, then the exposed API calls should have strong validation. It will help your 'users' (who are developers) and ensure you aren't stuck supporting usage you never intended to support.
Otherwise, I would not add such checks. I think they make the code harder to read, and these checks are rarely tested. In the past I would add a lot of code like this to make sure my code doesn't do the wrong thing. Now I write unit tests to verify my code does the right thing. The difference? I think tests are more maintainable, more readable, and they don't clutter your production code.
In the case of opening a file that is already open, it depends on knowing the effect of the request - will it reset the current read location, for example?
In the case of closing a file that is already closed, think of it as a request for the file to be put in a known state. The code doesn't have to do anything, but the desired state is achieved, so the code can return a success condition. This is not true if there is some sort of file buffering that needs to be taken care of, or maybe an interlinked resource to coordinate, like a modem/serial port or a printer/spooler.
Step back and think of the problem in terms of the desired outcome including any side-effects.
We once put a 'logout' link on an app menu that was displayed regardless of your login status. Why? Because it only took a simple (and very short) method to handle returning you to the login screen from the login screen, and it saved a large number of checks needed to track the login status just so the 'logout' menu item was displayed only when you were logged in.
Logically invalid invocations should always be reported to the user in debug mode.
When compiled in release mode, your code should not throw any unneeded exceptions or do anything else which could endanger the whole application.
Personally, I prefer having some kind of logfile, and logging such logically invalid invocations surely will do no harm (at least when performance is not important).
I understand that "Exceptions are for exceptional cases" [a], but besides just being repeated over and over again, I've never found an actual reason for this fact.
Being that they halt execution, it makes sense that you wouldn't want them for plain conditional logic, but why not input validation?
Say you were to loop through a group of inputs and catch each exception to group them together for user notification... I continually see that this is somehow "wrong" because users enter incorrect input all the time, but that point seems to be based on semantics.
The input is Not what was expected and hence is exceptional. Throwing an exception allows me to define exactly what was wrong, like StringValueTooLong or IntegerValueTooLow or InvalidDateValue or whatever. Why is this considered wrong?
Alternatives to throwing an exception would be to either return (and eventually collect) an error code or, far worse, an error string. Then I would either show those error strings directly, or parse the error codes and show corresponding error messages to the user. Wouldn't an exception be considered a malleable error code? Why create a separate table of error codes and messages, when these could be generalized with the exception functionality already built into my language?
Also, I found this article by Martin Fowler as to how to handle such things - the Notification pattern. I'm not sure how I see this as being anything other than Exceptions that don't halt execution.
a: Everywhere I've read anything about Exceptions.
--- Edit ---
Many great points have been made. I've commented on most and +'d the good points, but I'm not yet completely convinced.
I don't mean to advocate Exceptions as the proper means to resolve Input Validation, but I would like to find good reasons why the practice is considered so evil when it seems most alternate solutions are just Exceptions in disguise.
Reading these answers, I find it very unhelpful to say, "Exceptions should only be used for exceptional conditions". This begs the whole question of what is an "exceptional condition". This is a subjective term, the best definition of which is "any condition that your normal logic flow doesn't deal with". In other words, an exceptional condition is any condition you deal with using exceptions.
I'm fine with that as a definition, I don't know that we'll get any closer than that anyway. But you should know that that's the definition you are using.
If you are going to argue against exceptions in a certain case, you have to explain how to divide the universe of conditions into "exceptional" and "non-exceptional".
In some ways, it's similar to answering the question, "where are the boundaries between procedures?" The answer is, "Wherever you put the begin and end", and then we can talk about rules of thumb and different styles for determining where to put them. There are no hard and fast rules.
A user entering 'bad' input is not an exception: it's to be expected.
Exceptions should not be used for normal control flow.
In the past many authors have said that exceptions are inherently expensive. Jon Skeet has blogged to the contrary (and mentioned it a few times in answers here on SO), saying that they are not as expensive as reported (although I wouldn't advocate using them in a tight loop!).
The biggest reason to use them is ‘statement of intent’ i.e. if you see an exception handling block you immediately see the exceptional cases which are dealt with outside of normal flow.
There is one important other reason than the ones mentioned already:
If you use exceptions only for exceptional cases you can run in your debugger with the debugger setting "stop when exception is thrown". This is extremely convenient because you drop into the debugger on the exact line that is causing the problem. Using this feature saves you a fair amount of time every day.
In C# this is possible (and I recommend it wholeheartedly), especially after they added the TryParse methods to all the number classes. In general, none of the standard libraries require or use "bad" exception handling. When I approach a C# codebase that has not been written to this standard, I always end up converting it to exception-free-for-regular-cases, because the stop-on-throw feature is so valuable.
In the Firebug JavaScript debugger you can also do this, provided that your libraries don't use exceptions badly.
When I program in Java, this is not really possible, because so many things use exceptions for non-exceptional cases, including a lot of the standard Java libraries. So this time-saving feature is not really available in Java. I believe this is due to checked exceptions, but I won't start ranting about how they are evil.
Errors and Exceptions – What, When and Where?
Exceptions are intended to report errors, thereby making code more robust. To understand when to use exceptions, one must first understand what errors are and what is not an error.
A function is a unit of work, and failures should be viewed as errors or otherwise based on their impact on functions. Within a function f, a failure is an error if and only if it prevents f from meeting any of its callees' preconditions, achieving any of f's own postconditions, or reestablishing any invariant that f shares responsibility for maintaining.
There are three kinds of errors:
a condition that prevents the function from meeting a precondition (e.g., a parameter restriction) of another function that must be called;
a condition that prevents the function from establishing one of its own postconditions (e.g., producing a valid return value is a postcondition); and
a condition that prevents the function from re-establishing an invariant that it is responsible for maintaining. This is a special kind of postcondition that applies particularly to member functions. An essential postcondition of every non-private member function is that it must re-establish its class’s invariants.
Any other condition is not an error and should not be reported as an error.
Why are Exceptions said to be so bad for Input Validation?
I guess it is because of a somewhat ambiguous understanding of "input" as meaning either the input of a function or the value of a field, where the latter shouldn't throw an exception unless it is part of a failing function.
Maintainability - Exceptions create odd code paths, not unlike GOTOs.
Ease of use (for other classes) - Other classes can trust that exceptions raised from your user input class are actual errors.
Performance - In most languages, an exception incurs a performance and memory usage penalty.
Semantics - The meaning of words does matter. Bad input is not "exceptional".
I think the difference depends on the contract of the particular class, i.e.
For code that is meant to deal with user input, and to program defensively for it (i.e. sanitise it), it would be wrong to throw an exception for invalid input - it is expected.
For code that is meant to deal with already sanitised and validated input, which may have originated with the user, throwing an exception would be valid if you found some input that is meant to be forbidden. The calling code is violating the contract in that case, and it indicates a bug in the sanitising and/or calling code.
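A minimal Java sketch of that contract split (the names and the 0-150 range are illustrative): the boundary treats bad input as expected, while the inner method treats it as a caller bug.

import java.util.Optional;

public class AgeInput {
    // Boundary layer: invalid user input is expected; report it, don't throw.
    static Optional<Integer> parseAge(String raw) {
        try {
            int age = Integer.parseInt(raw.trim());
            return (age >= 0 && age <= 150) ? Optional.of(age) : Optional.empty();
        } catch (NumberFormatException e) {
            return Optional.empty(); // expected case: the user typed rubbish
        }
    }

    // Inner layer: input is supposed to be sanitised already, so a bad value
    // here indicates a bug in the caller and merits an exception.
    static void registerAge(int age) {
        if (age < 0 || age > 150) {
            throw new IllegalArgumentException("age out of range: " + age);
        }
        // ... proceed with the validated value ...
    }
}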
When using exceptions, the error handling code is separated from the code causing the error. This is the intent of exception handling - being an exceptional condition, the error cannot be handled locally, so an exception is thrown to some higher (and unknown) scope. If not handled, the application will exit before any more harm is done.
If you ever, ever, ever throw an exception when you are doing simple logic operations, like verifying user input, you are doing something very, very, very wrong.
The input is Not what was expected and hence is exceptional.
This statement does not sit well with me at all. Either the UI constrains user input (e.g., the use of a slider that bounds min/max values) and you can now assert certain conditions - no error handling required. Or the user can enter rubbish, and you expect this to happen and must handle it. One or the other - there is nothing exceptional going on here whatsoever.
Throwing an exception allows me to define exactly what was wrong like StringValueTooLong or IntegerValueTooLow or InvalidDateValue or whatever. Why is this considered wrong?
I consider this beyond wrong - closer to evil. You can define an abstract ErrorProvider interface, or return a complex object representing the error rather than a simple code. There are many, many options for how you retrieve error reports. Using exceptions because they are convenient is so, so wrong. I feel dirty just writing this paragraph.
Think of throwing an exception as hope. A last chance. A prayer. Validating user input should not lead to any of these conditions.
Is it possible that some of the disagreement is due to a lack of consensus about what 'user input' means? And indeed, at what layer you're coding.
If you're coding a GUI user interface, or a Web form handler, you might well expect invalid input, since it's come direct from the typing fingers of a human being.
If you're coding the model part of an MVC app, you may have engineered things so that the controller has sanitised inputs for you. Invalid input getting as far as the Model would indeed be an exception, and may be treated as such.
If you're coding a server at the protocol level, you might reasonably expect the client to be checking user input. Again, invalid input here would indeed be an exception. This is quite different from trusting the client 100% (that would be very stupid indeed) - but unlike direct user input, you predict that most of the time inputs would be OK. The lines blur here somewhat. The more likely it is that something happens, the less you want to use exceptions to handle it.
This is a linguistic point of view on the matter.
Why are Exceptions said to be so bad for Input Validation?
Conclusion:
Exceptions are not defined clearly enough, so there are different opinions.
Wrong input is seen as a normal thing, not as an exception.
Thoughts?
It probably comes down to the expectations one has about the code being created.
The client cannot be trusted.
Validation has to happen on the server's side.
Stronger: every validation happens on the server's side.
Because validation happens on the server's side, it is expected to be done there, and what is expected is not an exception, since it is expected.
However,
The client's input cannot be trusted.
The client's input-validation can be trusted.
If validation is trusted, it can be expected to produce valid input.
Now every input is expected to be valid.
Invalid input is now unexpected - an exception.
Exceptions can be a nice way to exit the code.
A thing mentioned to consider is whether your code is left in a proper state.
I would not know what would leave my code in an improper state: connections get closed automatically, leftover variables are garbage-collected - what's the problem?
Another vote against exception handling for things that aren't exceptions!
In .NET the JIT compiler won't perform optimizations in certain cases even when exceptions aren't thrown. The following articles explain it well.
http://msmvps.com/blogs/peterritchie/archive/2007/06/22/performance-implications-of-try-catch-finally.aspx
http://msmvps.com/blogs/peterritchie/archive/2007/07/12/performance-implications-of-try-catch-finally-part-two.aspx
When an exception gets thrown, it generates a whole bunch of information for the stack trace, which may not be needed if you were actually "expecting" the exception, as is often the case when converting strings to ints, etc.
In general, libraries throw exceptions and clients catch them and do something intelligent with them. For user input I just write validation functions instead of throwing exceptions. Exceptions seem excessive for something like that.
There are performance issues with exceptions, but in GUI code you won't generally have to worry about them. So what if a validation takes an extra 100 ms to run? The user isn't going to notice that.
In some ways it's a tough call - On the one hand, you might not want to have your entire application come crashing down because the user entered an extra digit in a zip code text box and you forgot to handle the exception. On the other, a 'fail early, fail hard' approach ensures that bugs get discovered and fixed quickly and keeps your precious database sane. In general I think most frameworks recommend that you don't use exception handling for UI error checking and some, like .NET Windows Forms, provide nice ways to do this (ErrorProviders and Validation events) without exceptions.
Exceptions should not be used for input validation because not only should exceptions be reserved for exceptional circumstances (which, as has been pointed out, incorrect user entry is not), but they also create exceptional code (not in the brilliant sense).
The problem with exceptions in most languages is that they change the rules of program flow. This is fine in a truly exceptional circumstance, where it is not necessarily possible to figure out what the valid flow should be, and therefore you just throw an exception and get out. However, where you know what the flow should be, you should create that flow (in the case listed, it would be to raise a message to the user telling them they need to re-enter some information).
Exceptions were truly overused in an application I work on daily, even for the case where a user entered an incorrect password when logging in - which, by your logic, would be an exceptional result because it is not what the application wants. However, when a process has one of two outcomes, either correct or incorrect, I don't think we can say that incorrect, no matter how wrong, is exceptional.
One of the major problems I have found with working with this code is trying to follow the logic of the code without getting deeply involved with the debugger. Although debuggers are great, it should be possible to add logic for what happens when a user enters an incorrect password without having to fire one up.
Keep exceptions for truly exceptional execution, not just wrong results. In the case I was highlighting, getting your password wrong is not exceptional, but not being able to contact the domain server may be!
When I see exceptions being thrown for validation errors I often see that the method throwing the exception is performing lots of validations all at once. e.g.
public bool isValidDate(string date)
{
    // check for 4 digit year (guard conditions and helper names illustrative)
    if (!hasFourDigitYear(date))
        throw new FourDigitYearRequiredException();

    // check for leap years
    if (isFeb29InANonLeapYear(date))
        throw new NoFeb29InANonLeapYearException();

    return true;
}
This code tends to be pretty fragile and hard to maintain as the rules pile up over the months and years. I usually prefer to break up my validations into smaller methods that return bools. It makes it easier to tweak the rules.
public bool isValidDate(string date)
{
    bool retVal = doesDateContainAFourDigitYear(date);
    retVal = retVal && isDateInALeapYear(date);
    return retVal;
}

public bool isDateInALeapYear(string date) { /* ... */ }
public bool doesDateContainAFourDigitYear(string date) { /* ... */ }
As has been mentioned already, returning an error struct/object containing information about the error is a great idea. The most obvious advantage being that you can collect them up and display all of the error messages to the user at once instead of making them play Whack-A-Mole with the validation.
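A minimal Java sketch of such an error-collecting object, in the spirit of the Notification pattern linked in the question (names illustrative):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Notification {
    private final List<String> errors = new ArrayList<>();

    public void addError(String message) {
        errors.add(message);
    }

    public boolean hasErrors() {
        return !errors.isEmpty();
    }

    // Every problem at once, so the user doesn't play Whack-A-Mole.
    public List<String> getErrors() {
        return Collections.unmodifiableList(errors);
    }
}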
I used a combination of both as a solution:
For each validation function, I pass a record that I fill with the validation status (an error code).
At the end of the function, if a validation error exists, I throw an exception; this way I do not throw an exception for each field, but only once. I also took advantage of the fact that throwing an exception will stop execution, because I do not want execution to continue when data is invalid.
For example:
procedure Validate(var R: TValidationRecord);
begin
  ErrorFlag := False;
  if Field1 is not valid then
  begin
    R.Field1ErrorCode := SomeErrorCode;
    ErrorFlag := True;
  end;
  if Field2 is not valid then
  begin
    R.Field2ErrorCode := SomeErrorCode;
    ErrorFlag := True;
  end;
  if Field3 is not valid then
  begin
    R.Field3ErrorCode := SomeErrorCode;
    ErrorFlag := True;
  end;
  if ErrorFlag then
    ThrowException;
end;
If relying on a boolean only, the developer using my function would have to take this into account, writing:
if not Validate() then
  DoNotContinue();
but he may forget and only call Validate() (I know that he should not, but he might).
So, in the code above I gained two advantages:
1. Only one exception is thrown in the validation function.
2. The exception, even if uncaught, will stop execution and show up at test time.
8 years later, and I'm running into the same dilemma trying to apply the CQS pattern. I'm on the side that input validation can throw an exception, but with an added constraint. If any input fails, you need to throw ONE type of exception: ValidationException, BrokenRuleException, etc. Don't throw a bunch of different types as it'll be impossible to handle them all. This way, you get a list of all the broken rules in one place. You create a single class that is responsible for doing validation (SRP) and throw an exception if at least 1 rule is broken. That way, you handle one situation with one catch and you know you are good. You can handle that scenario no matter what code is called. This leaves all the code downstream much cleaner as you know it is in a valid state or it wouldn't have gotten there.
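A minimal Java sketch of that single-exception-type idea (ValidationException and the sample rules are illustrative, not from any real library):

import java.util.ArrayList;
import java.util.List;

public class ValidationException extends RuntimeException {
    private final List<String> brokenRules;

    public ValidationException(List<String> brokenRules) {
        super("Validation failed: " + brokenRules);
        this.brokenRules = new ArrayList<>(brokenRules);
    }

    public List<String> getBrokenRules() {
        return brokenRules;
    }
}

class OrderValidator {
    // Collect every broken rule, then throw once, so a single catch
    // upstream can display them all together.
    static void validate(String customerId, int quantity) {
        List<String> broken = new ArrayList<>();
        if (customerId == null || customerId.isEmpty()) {
            broken.add("customer id is required");
        }
        if (quantity <= 0) {
            broken.add("quantity must be positive");
        }
        if (!broken.isEmpty()) {
            throw new ValidationException(broken);
        }
    }
}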
To me, getting invalid data from a user is not something you would normally expect. (If every user sends you invalid data the first time, I'd take a second look at your UI.) Any data that prevents you from processing the true intent, whether it came from the user or from somewhere else, needs to abort processing. How is it any different from throwing an ArgumentNullException for a single piece of data when it was user input, versus it being a field on a class that says "this is required"?
Sure, you could do validation first and write that same boilerplate code on every single "command", but I think that is more of a maintenance nightmare than catching invalid user input in one place at the top and handling it the same way regardless. (Less code!) The performance hit will only come when the user gives invalid data, which should not happen that often (or you have a bad UI). Any and all rules on the client side have to be re-written on the server anyway, so you could just write them once, do an AJAX call, and the < 500 ms delay will save you a ton of coding time (only one place to put all your validation logic).
Also, while you can do some neat validation with ASP.NET out of the box, if you want to re-use your validation logic in other UIs, you can't, since it is baked into ASP.NET. You'd be better off creating something below it and handling errors above it, regardless of the UI being used. (My 2 cents, at least.)
I agree with Mitch that "exceptions should not be used for normal control flow". I just want to add that, from what I remember from my computer science classes, catching exceptions is expensive. I've never really tried to do benchmarks, but it would be interesting to compare performance between, say, an if/else vs. a try/catch.
One problem with using exceptions is a tendency to detect only one problem at a time. The user fixes that and resubmits, only to find another problem! An interface that returns a list of issues that need resolving is much friendlier (though it could be wrapped in an exception).