Fix common library functions, or abandon them? - language-agnostic

Imagine I have a function with a bug in it:
Pseudo-code:
void Foo(LPVOID o)
{
    //implementation details omitted
}
The problem is the user passed null:
Object bar = null;
...
Foo(bar);
Then the function might crash due to an access violation, but it could also happen to work fine. The bug is that the function should have been checking for the invalid case of passing null, but it never did. It was never an issue because developers were trusted to know what they were doing.
If I now change the function to:
Pseudo-code:
void Foo(LPVOID o)
{
    if (o == null) throw new EArgumentNullException("o");
    //implementation details omitted
}
then people who were happily using the function, and happened not to get an access violation, will suddenly begin seeing an EArgumentNullException.
Do I continue to let people use the function improperly, and create a new version of the function? Or do I fix the function to include the check it should have had originally?
So now the moral dilemma: do you ever add new sanity checks, safety checks, or assertions to existing code? Or do you declare the old function abandoned and create a new one?
Consider a bug so common that Microsoft had to fix it for developers:
MessageBox(GetDesktopWindow(), ...);
You never, ever want to make a window modal against the desktop. You'll lock up the system. Do you continue to let developers lock up the user's computer? Or do you change the function to:
MessageBox(HWND hWndParent, ...)
{
    if (hWndParent == GetDesktopWindow())
        throw new Exception("hWndParent cannot be the desktop window. Use NULL instead.");
    ...
}
In reality Microsoft changed the Window Manager to auto-fix the bad parameter:
MessageBox(HWND hWndParent, ...)
{
    if (hWndParent == GetDesktopWindow())
        hWndParent = 0;
    ...
}
In my made-up example there is no way to patch the function - if I wasn't given an object, I can't do what I need to do with it.
Do you risk breaking existing code by adding parameter validation? Do you let existing code continue to be wrong, getting incorrect results?

The problem is that not only are you fixing a bug, but you are changing the semantic signature of the method by introducing an error case.
From a software engineering perspective I would advocate specifying methods as precisely as possible (for instance using pre- and post-conditions), but once the method is out there, specification changes are a no-go (or at least you would have to check every occurrence of the method) and a new method would be better.

I'd keep the old function and simply have it emit a warning that notifies you of every (possibly) wrong use, and then I'd just kick the developer who used it wrong until he uses it properly.
You cannot catch everything. What if someone wrote MakeLocation("Ian Boyd", "is stupid")? Would you create a new function or change the function to catch that? No, you would fire the developer (or at least punish him).
Of course this requires that you document what your function requires as input.

This is where having automated tests [unit testing, integration testing, automated functional testing] is great: they give you the power to change existing code with confidence.
When making changes like this, I would suggest finding all usages and ensuring they behave how you believe they should.
I myself would fix bugs in an existing function rather than duplicate it 99% of the time. If the fix changes behavior a lot and there are many calls to the function, you need to be very sure of your change.
So go ahead, make your change, run your unit tests, then your automated functional tests. Fix any errors and you're golden!

If your code has a bug in it, you should do what you normally do when any bug is reported. One part of that is assessing the impact of fixing it and of not fixing it. Sometimes the right thing to do with a bug is not to fix it, because the behaviour it exposes has become accepted. Sometimes the cost of fixing it, or the inconvenience of releasing a fix outside the normal release cycle, stops you from releasing the fix for a while. This isn't a moral dilemma; it's an economic question of costs and benefits. If you are disturbed at the thought of having known bugs in your published code, publish a known-bugs list.
One option none of the other respondents seems to have suggested is to wrap the buggy function in another function which imposes the new behaviour that you require. In a world where functions can run to many lines, it is sometimes less likely to introduce new bugs if you preserve a 99%-correct piece of code and address the change without modifying the existing code. Of course, this is not always possible.
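For the Foo example above, the wrapping idea might look like this minimal Java-style sketch (fooChecked is a hypothetical name; the shipped foo stays byte-for-byte untouched):
public static void fooChecked(Object o) {
    // new behaviour imposed by the wrapper, not by the original function
    if (o == null) {
        throw new IllegalArgumentException("o must not be null");
    }
    foo(o); // the existing, unmodified function
}
New callers use fooChecked and get the validation; old callers keep exactly the behaviour they shipped against.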

Two choices:
Give the error-checking version a new name and deprecate the old version (one version later, have it start issuing warnings - compile time if possible, run time if necessary; two versions later, remove it). A sketch of this follows the list.
[Not always possible] Place the newly introduced error check in such a way that it only triggers if the unmodified version would crash or produce undefined behavior. (That way users who were taking care in their code don't get any unpleasant surprises.)
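A sketch of the first choice in Java, with hypothetical names (@Deprecated gives you the compile-time warning for free):
/**
 * @deprecated Behaviour on null is undefined; use {@link #fooChecked(Object)}.
 */
@Deprecated
public static void foo(Object o) {
    // original implementation, untouched
}

public static void fooChecked(Object o) {
    if (o == null) {
        throw new IllegalArgumentException("o must not be null");
    }
    foo(o);
}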

It entirely depends on you, your codebase, and your users.
If you are Microsoft and you have a bug in your API that is used by millions of devs around the world, then you will probably want to just create a new function and update the docs on the old one. If you can, you would also want to update the compiler to give warnings as well. (Though even then you may be able to change the existing system; remember when MS moved VC to the C++ standard and you had to change your #include <iostream.h> lines to #include <iostream> and add using namespace std; to get simple, existing console apps working again?)
It basically depends on what the function is. If it is something basic that will have massive ripple effects, then changing it could break a lot of code. If it is just an ancillary function, then you may as well fix it. Of course, if you are Microsoft and your own code depends on a bug in one of your functions, then you probably should fix it, since keeping it is just plain embarrassing. If other devs rely on the bug (that you created), then you may have an obligation to your users not to break the code that you caused to be buggy.
If you are a small company or an independent developer, then sure, go ahead and fix the function. If you only need to update yourself or a few people on the new usage, then fixing it is the best solution, especially since it is not even a big deal: all it really requires is an added note in the docs for the function (e.g. "do not pass NULL" or "an exception is thrown if hWnd is the desktop", etc.).
Another option, as a sort of compromise, would be to create a wrapper function. You could create a small, inline function that checks the args and then calls the existing function. That way you don't really have to do much in the short term, and eventually, when people have moved to the new one, you can deprecate or even remove the old one, moving the code into the new one between the checks.
In most scenarios it is better to fix a buggy function, particularly if you are merely adding argument checks as opposed to completely changing the behavior of the function. It is not really a good idea to facilitate (read: encourage) bad coding just because it would break some existing code (especially if the code is free!). Think about it: if someone is creating a new program, they can do it right from the start instead of relying on a bug. If they are re-compiling an old program that depends on the bug, then they can just update the code. Again, it depends on how messy and convoluted the code is, how many people are affected, and whether or not they pay you, but it is quite common to have to update old code to, for example, initialize variables that hadn't been, or check for error codes, etc.
To sum up, in your specific example (given the information provided), you should just fix it.

Related

Reusing existing functions

When adding a new feature to an existing system, if you come across an existing function that almost does what you need, is it best practice to:
Copy the existing function and make your changes on the new copy (knowing that copying code makes your fellow devs cry).
-or-
Edit the existing function to handle both the existing case and your new case, risking that you may introduce new bugs into existing parts of the system (which makes the QA team cry)?
If you edit the existing function, where do you draw the line before you should just create a new independent function (based on a copy)... 10% of the function? 50%?
You can't always do this, but one solution here would be to split your existing function into smaller parts, allowing you to use the parts you need without editing all the code, and making it easier to edit small pieces of code.
That being said, if you think you could introduce new bugs into existing parts of the system without noticing, you should probably think about using unit tests.
The rule of thumb I tend to follow is that if I can cover the new behaviour by adding an extra parameter (or a new valid value) to the existing function, while leaving the code more-or-less "obviously the same" in the existing case, then there's not much danger in changing the function.
For example, old code:
def utf8len(s):
    return len(s.encode('utf8'))  # or maybe something more memory-efficient
New use case - I'm writing some code in a style that uses the null object pattern, so I want utf8len(None) to return None instead of throwing an exception. I could define a new function utf8len_nullobjectpattern, but that's going to get quite annoying quite quickly, so:
def utf8len(s):
    if s != None:
        return len(s.encode('utf8'))  # old code path is untouched
    else:
        return None  # new code path introduced
Then even if the unit tests for utf8len are incomplete, I can bet that I haven't changed the behavior for any input other than None. I also need to check that nobody was ever relying on utf8len to throw an exception for a None input, which is a question of (1) the quality of documentation and/or tests, and (2) whether people actually pay any attention to defined interfaces or just Use The Source. If the latter, I need to look at the calling sites, but if things are done well then I pretty much don't.
Whether the old allowed inputs are still treated "obviously the same" isn't really a question of what percentage of the code is modified; it's a question of how it's modified. I've picked a deliberately trivial example, since the whole of the old function body is visibly still there in the new function, but I think it's something that you know when you see it. Another example would be making something that was fixed configurable (perhaps by passing a value, or a dependency that's used to get a value) with a default parameter that just provides the old fixed value. Every instance of the old fixed thing is replaced with (a call to) the new parameter, so it's reasonably easy to see on a diff what the change means. You have (or write) at least some tests to give confidence that you haven't broken the old inputs via some stupid typo, so you can go ahead even without total confidence in your test coverage.
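In a language without default parameters the same trick works with a delegating overload. A Java sketch with hypothetical names - the formerly fixed charset becomes a parameter, and the overload keeps existing callers unchanged:
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class Encoding {
    // old entry point: behaviour unchanged, now just delegates
    static byte[] utf8Bytes(String s) {
        return encodedBytes(s, StandardCharsets.UTF_8);
    }

    // new entry point: the formerly fixed value is now configurable
    static byte[] encodedBytes(String s, Charset cs) {
        return s.getBytes(cs);
    }
}
The diff shows exactly one mechanical substitution, which is what makes the "obviously the same" judgement easy.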
Of course you want comprehensive testing, but you don't necessarily have it. There are also two competing maintenance imperatives here: 1 - don't duplicate code, since if it has bugs in it, or behavior that might need to change in future, then you're duplicating the bugs / current behavior. 2 - the open/closed principle, which is a bit high-falutin' but basically says, "write stuff that works and then don't touch it". 1 says that you should refactor to share code between these two similar operations, 2 says no, you've shipped the old one, either it's usable for this new thing or it isn't, and if it isn't then leave it alone.
You should always strive to avoid code duplication. Therefore I would suggest trying to write a new function that modifies the return value of the already existing function to implement your new feature.
I do realize that in some cases it might not be possible to do that. In those cases you should definitely consider rewriting the existing function without changing its interface. Introducing new bugs while doing that should be prevented by unit tests that can be run on the modified function before you add it to the project code.
And if you need only part of the existing function, consider extracting a new function from the existing one and using this new "helper" function in both the existing function and your new one - again confirming everything works as intended via unit tests.
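As a sketch of that extraction (Java, hypothetical names), the shared core moves into a helper so nothing is duplicated:
// the 99%-common logic lives in one place
static int sharedCore(int x) {
    return x * x;
}

static int existingFeature(int x) {
    return sharedCore(x) + 1; // old behaviour, now via the helper
}

static int newFeature(int x) {
    return sharedCore(x) * 2; // new behaviour reuses the same core
}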

Are fail-fast and fail-safe exception handling principles incompatible?

I'd like to understand better what fail-fast and fail-safe are.
What it seems to me at first glance is that fail-fast means that we want to make the system fail clearly when anything unexpected happens.
I mean, for example, if a factory can't create an instance of an object, then for the fail-fast principle we really don't want the factory to return null, or an empty object, or a partially initialized object that could, by chance, be used correctly by the application -> most of the time we would get unexpected behaviour, or an unexpected exception raised at another level that wouldn't let us see that the real problem is in the factory.
Is that what this principle means?
The fail-safe principle is quite hard for me to understand.
The most common example in Java concerns the collections, their iterators, and concurrent access.
It's said that a collection/iterator that permits modifying a list while iterating over it is called fail-safe. It's usually done by iterating over a copy of the initial list.
But in this example I don't really understand where the system fails... and thus why it's fail-safe... Where is the failure? We just iterate over a copy or not, depending on our needs...
I don't see any match with the wiki definition of fail-safe...
Thus in articles like:
http://www.certpal.com/blogs/2009/09/iterators-fail-fast-vs-fail-safe/
they oppose fail-fast to fail-safe... What I just don't get is why we call this iteration over a copy fail-safe...
I found another example here:
http://tutorials.jenkov.com/java-exception-handling/fail-safe-exception-handling.html
It seems a lot more related to the initial definition of the fail-safe principle.
What I think of fail-safe is that when a system fails, we must ensure that the failure handler doesn't fail or, if it does, ensure that the real initial problem is not hidden by the failure of the handler. In the given example the handler is right next to the initial failure code, but it's not always the case. Fail-safe means to me more something like handling correctly the errors that could happen in the failure handlers themselves - something like the sketch below.
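This sketch is roughly what I have in mind (doWork and logger are hypothetical):
try {
    doWork();
} catch (RuntimeException original) {
    try {
        logger.error("doWork failed", original); // the handler itself can fail
    } catch (RuntimeException loggingFailure) {
        // swallowed: a broken logger must not hide the real problem
    }
    throw original; // the initial failure still surfaces
}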
Thus for me these two principles don't seem incompatible.
What do you think?
Can't a system fail fast & safely???
It is better to avoid failure in the first place (fail safe), but if this is not possible, it is best to fail fast (to fail as quickly as possible).
The two are not opposites, but complementary.
As you say - I like my code to be as fail safe as possible, but where it isn't, I want it to fail fast.
Fail safe does not mean that something will not fail - it means that when it fails, it fails in a safe way. Something that cannot fail is failure-proof, if that is possible at all.
A fail safe elevator jams at its present location if the cable breaks. The riders are inconveniently stuck, but conveniently not dead.
Consider the example of an iterator. The theory is that it is better to signal to the client code immediately that something is amiss, rather than to blindly return a valid-looking answer that may cause more serious problems down the line. If the client code is safety-conscious, it has the opportunity to intervene and recover right away. So in this instance, fail safe and fail fast are compatible, the latter being a strategy to achieve the former.
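That contrast is easy to see with the standard Java collections; a small runnable sketch (Java 9+ for List.of):
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class IteratorStyles {
    public static void main(String[] args) {
        // Fail-fast: ArrayList's iterator detects the structural change
        // and signals immediately instead of returning dubious results.
        List<String> failFast = new ArrayList<>(List.of("a", "b", "c"));
        try {
            for (String s : failFast) {
                failFast.remove(s); // modification during iteration
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast iterator complained: " + e);
        }

        // Snapshot ("fail-safe") style: the iterator walks a copy, so the
        // modification is never seen as a failure at all.
        List<String> snapshot = new CopyOnWriteArrayList<>(List.of("a", "b", "c"));
        for (String s : snapshot) {
            snapshot.remove(s); // no exception; the copy is unaffected
        }
        System.out.println("list after iteration: " + snapshot); // []
    }
}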
On the other hand, consider a web browser in the hands of someone who is not comfortable with computers. They are trying to see what time their movie starts. Let's say that (heaven forbid) the HTML on the page is not well-formed. If the renderer were to fail fast, it might decide to abandon rendering the information the user wants to see because a preceding <HR> tag is spelled <H>. In this case, it is better to just blunder on, rendering the page as best as possible. The error might be insignificant and never caught, or it may be caught much, much later when someone finally notices that the page doesn't look quite right. So here is an example where fail fast is not a good strategy for fail safe.
If the web page were my online banking application, I'd surely want it to blow up spectacularly (with a rollback, of course) if the slightest thing goes wrong. So then fail fast once again becomes the strategy of choice for fail safe.
My point is that fail safe is a concept unto itself, and that fail fast may or may not be a particular technique that contributes to failure safety.

How strict should I be in the "do the simplest thing that could possibly work" while doing TDD

For TDD you have to
Create a test that fails
Do the simplest thing that could possibly work to pass the test
Add more variants of the test and repeat
Refactor when a pattern emerges
With this approach you're supposed to cover all the cases (that come to my mind, at least), but I wonder if I am being too strict here and whether it is possible to "think ahead" to some scenarios instead of simply discovering them.
For instance, I'm processing a file, and if it doesn't conform to a certain format I am to throw an InvalidFormatException.
So my first test was:
@Test
void testFormat() {
    // empty doesn't do anything nor throw anything
    processor.validate("empty.txt");
    try {
        processor.validate("invalid.txt");
        assert false : "Should have thrown InvalidFormatException";
    } catch (InvalidFormatException ife) {
        assert "Invalid format".equals(ife.getMessage());
    }
}
I run it and it fails because it doesn't throw an exception.
So the next thing that comes to mind is "Do the simplest thing that could possibly work", so I write:
public void validate(String fileName) throws InvalidFormatException {
    if (fileName.equals("invalid.txt")) {
        throw new InvalidFormatException("Invalid format");
    }
}
Doh!! (Although the real code is a bit more complicated, I found myself doing something like this several times.)
I know that I will eventually have to add another file name, and another test, that would make this approach impractical and force me to refactor to something that makes sense (which, if I understood correctly, is the point of TDD: to discover the patterns the usage unveils), but:
Q: Am I taking the "Do the simplest thing..." mantra too literally?
I think your approach is fine, if you're comfortable with it. You didn't waste time writing a silly case and solving it in a silly way - you wrote a serious test for real desired functionality and made it pass in - as you say - the simplest way that could possibly work. Now - and into the future, as you add more and more real functionality - you're ensuring that your code has the desired behavior of throwing the correct exception on one particular badly-formatted file. What comes next is to make that behavior real - and you can drive that by writing more tests. When it becomes simpler to write the correct code than to fake it again, that's when you'll write the correct code. That assessment varies among programmers - and of course some would decide that time is when the first failing test is written.
You're using very small steps, and that's the most comfortable approach for me and some other TDDers. If you're more comfortable with larger steps, that's fine, too - but know you can always fall back on a finer-grained process on those occasions when the big steps trip you up.
Of course your interpretation of the rule is too literal.
It should probably sound like "Do the simplest potentially useful thing..."
Also, I think that when writing the implementation you should forget the body of the test you are trying to satisfy. You should remember only the name of the test (which should tell you what it tests). That way you will be forced to write code generic enough to be useful.
I too am a TDD newbie struggling with this question. While researching, I found this blog post by Roy Osherove, which was the first and only concrete and tangible definition of "the simplest thing that could possibly work" that I have found (and even Roy admitted it was just a start).
In a nutshell, Roy says:
Look at the code you just wrote in your production code and ask yourself the following:
“Can I implement the same solution in a way that is ..”
“.. More hard-coded ..”
“.. Closer to the beginning of the method I wrote it in.. “
“.. Less indented (in as less “scopes” as possible like ifs, loops, try-catch) ..”
“.. shorter (literally less characters to write) yet still readable ..”
“… and still make all the tests pass?”
If the answer to one of these is “yes” then do that, and see all the tests still passing.
Lots of comments:
If validation of "empty.txt" throws an exception, you don't catch it.
Don't Repeat Yourself. You should have a single test function that decides if validation does or does not throw the exception. Then call that function twice, with two different expected results.
I don't see any signs of a unit-testing framework. Maybe I'm missing them? But just using assert won't scale to larger systems. When you get a result from validation, you should have a way to announce to a testing framework that a given test, with a given name, succeeded or failed.
I'm alarmed at the idea that checking a file name (as opposed to contents) constitutes "validation". In my mind, that's a little too simple.
Regarding your basic question, I think you would benefit from a broader idea of what the simplest thing is. I'm also not a fundamentalist TDDer, and I'd be fine with allowing you to "think ahead" a little bit. This means thinking ahead to this afternoon or tomorrow morning, not thinking ahead to next week.
You missed point #0 in your list: know what to do. You say you are processing a file for validation purposes. Once you have specified what "validation" means (hint: do this before writing any code), you might have a better idea of how to (a) write tests that, well, test the specification as implemented, and (b) write the simplest thing.
If, e.g., validation is "must be XML", your test case is just some non-XML-conformant string, and your implementation uses an XML library, transforming (if necessary) its exceptions into those specified for your "validation" feature.
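A sketch of that delegation, reusing the question's InvalidFormatException and assuming the validator receives the content as a string:
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.SAXException;

public class Processor {
    // "must be XML" is delegated to the parser; its exceptions are
    // translated into the one our specification promises
    public void validate(String content) throws InvalidFormatException {
        try {
            DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(
                            content.getBytes(StandardCharsets.UTF_8)));
        } catch (SAXException e) {
            throw new InvalidFormatException("Invalid format");
        } catch (Exception e) { // IOException, ParserConfigurationException
            throw new InvalidFormatException("Invalid format");
        }
    }
}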
One thing of note to future TDD learners - the TDD mantra doesn't actually include "Do the simplest thing that could possibly work." Kent Beck's TDD Book has only 3 steps:
Red - Write a little test that doesn't work, and perhaps doesn't even compile at first.
Green - Make the test work quickly, committing whatever sins necessary in the process.
Refactor - Eliminate all of the duplication created in merely getting the test to work.
Although the phrase "Do the simplest thing..." is often attributed to Ward Cunningham, he actually asked the question "What's the simplest thing that could possibly work?", and that question was later turned into a command - one which Ward believes may confuse rather than help.
Edit: I can't recommend reading Beck's TDD Book strongly enough - it's like having a pairing session with the master himself, giving you his insights and thoughts on the Test Driven Development process.
Just as a method should do one thing only, one test should test one thing (one behavior) only. To address the example given, I'd write two tests, for instance test_no_exception_for_empty_file and test_exception_for_invalid_file. The second could indeed be several tests - one per sort of invalidity.
The third step of the TDD process should be interpreted as "add a new variant of the test", not "add a new variant to the test". Indeed, a unit test should be atomic (test one thing only) and generally follow the triple-A pattern: Arrange - Act - Assert. And it's very important to verify that the test fails first, to ensure it is really testing something.
I would also separate the responsibility of reading the file from validating its content. That way, the tests can pass a buffer to the validate() function and do not have to read files. Unit tests usually do not access the filesystem, because that slows them down.
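Put together, the two suggestions might look like this JUnit 4 sketch (Processor and InvalidFormatException are the question's names; here validate() is assumed to take the content rather than a file name):
import org.junit.Test;

public class ProcessorTest {
    private final Processor processor = new Processor();

    @Test
    public void no_exception_for_empty_content() throws Exception {
        processor.validate(""); // empty input is valid: nothing thrown
    }

    @Test(expected = InvalidFormatException.class)
    public void exception_for_invalid_content() throws Exception {
        processor.validate("not the expected format");
    }
}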

Is it a good practice to suppress warnings?

Sometimes while writing Java in Eclipse, I write code that generates warnings. A common one is this, which I get when extending the Exception class:
public class NumberDivideException extends Exception {
    public NumberDivideException() {
        super("Illegal complex number operation!");
    }

    public NumberDivideException(String s) {
        super(s);
    }
} // end NumberDivideException
The warning:
The serializable class NumberDivideException does not declare a static final serialVersionUID field of type long.
I know this warning is caused by my failure to... well, it says right above. I could solve it by including the serialVersionUID, but this is a one-hour tiny assignment for school; I don't plan on serializing it anytime soon...
The other option, of course, is to let Eclipse add @SuppressWarnings("serial").
But every time my mouse hovers over the Suppress option, I feel a little guilty.
For programming in general, is it a good habit to suppress warnings?
(Also, as a side question, is adding a "generated" serialVersionUID like serialVersionUID = -1049317663306637382L; the proper way to add a serialVersionUID, or do I have to determine the number some other way?)
EDIT: Upon seeing answers, it appears my question may be slightly argumentative... Sorry!
Too late for me to delete though...
In the particular case of Eclipse, rather than suppress warnings as they happen, I prefer setting Eclipse up to emit the warnings I care about and automatically ignore all instances of the ones I don't. See Window -> Preferences -> Java -> Compiler -> Errors/Warnings.
That's rather specific to Java, though, and I find that Java tends to have many more warnings I don't care about than most other languages. In other languages, I usually have all warnings on and fix them as they come up.
It's very satisfying to have code compile without warnings - and when you do get one, it stands out and may alert you to a problem in the code.
You should suppress warnings ONLY if you are absolutely sure of what you are doing,
and it would be better to have this kind of thing documented for future alterations (Javadoc comments, for example).
This way you hide the warnings you know won't cause problems and can focus on the ones that will cause you problems.
If it's a program you're ever going to look at again, it will pay off to do things 'the right way' -- and that means not suppressing warnings, but fixing them.
For school assignments, however, you are often asked to reinvent the wheel or complete trivial tasks, so you shouldn't feel any qualms about "hacking" stuff together. Unless, of course, you are graded on coding style...
You should never suppress warnings globally, not even one particular kind of warning. You should also set your compiler to be as picky as possible. Warnings are there to tell you about problems that may exist. You can either refactor your code to get rid of them or, in some languages, add a directive to ignore a specific bit of code that causes a specific warning. This allows you to review a warning and ignore it if you know it is OK.
Wherever practical, suppress warnings only on a particular line, or - at most - a particular file.
Not only does this ensure that warnings you WEREN'T expecting get through to you, but it serves as an explicit design note - pointing out to the next person editing the code (or yourself in a month's time) that you were aware of the issue and took a deliberate decision that it was acceptable for this code. That saves the next person from having to evaluate the situation again.
(I don't know enough about Eclipse to make this happen - this is a general principle that applies across languages.)
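In Java, for instance, the suppression can be scoped to a single declaration; applied to the class from the question, it covers only that class and every other warning still surfaces:
// scoped suppression: only the "serial" warning, only on this class
@SuppressWarnings("serial")
public class NumberDivideException extends Exception {
    public NumberDivideException() {
        super("Illegal complex number operation!");
    }

    public NumberDivideException(String s) {
        super(s);
    }
}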

Should code prevent a logically invalid call even when no harm would be done?

This one has been puzzling me for some time now.
Let's imagine a class which represents a resource, and in order to be able to use this resource one needs to first call the 'Open' method on it, or an InvalidOperationException will be thrown.
Should my code also check whether someone tries to open an already open resource, or close an already closed one?
Should code prevent a logically invalid invocation even when no harm would be done?
I think that programming this way would help produce better code on the other side, but I feel that I might be taking on too much responsibility and hurting reusability.
What do you guys think?
Edit:
I don't think this could be called defensive programming, because it won't let a possibly bad use slip by either; another InvalidOperationException will be thrown.
This is called defensive programming. That's a good programming practice, because you ensure that your application doesn't crash on misbehaviour.
Requiring that some method be called first, before another method is called, is not a good programming practice. It adds a lot of complexity, which is better handled by the class itself.
This is called sequential coupling. The Wikipedia article says that whether it's a bad practice depends on the context, but the code shouldn't crash when handled improperly. Sometimes it's necessary to throw an exception to make things clear.
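As a minimal Java sketch (a hypothetical Resource class) of how the class itself can handle that coupling: fail loudly on use-before-open, but treat a redundant close as a harmless no-op:
public class Resource {
    private boolean open = false;

    public void open() {
        if (open) {
            throw new IllegalStateException("Resource is already open");
        }
        open = true;
    }

    public void read() {
        if (!open) {
            throw new IllegalStateException("open() must be called before read()");
        }
        // ... use the underlying resource ...
    }

    public void close() {
        // idempotent by choice: "closed" is the requested state and it is
        // reached either way, so no exception is thrown
        open = false;
    }
}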
This really depends on what the class actually does. In some cases failing silently is a good idea (eg, you want your DVD player to continue working, not show an error message if it opens the DVD tray that is already open) and in other cases you want as much information as possible (eg, if an airplane tries to close a door that is reportedly already closed, then something is wrong and the pilot should be alerted).
In most cases throwing an error when a logically invalid action is performed is useful for developers, so implementing those exceptions depends on who will use the code. If it is used internally for one application, then it's not vital. But if it is used by many different projects or developers, then I would look into it.
If your example is really the case, then the Open functionality should probably be invoked by the class's constructor.
If you consider the C++ iostream library (which is very widely used and considered quite a good example), you can call any operation on a stream class whether it is open or not. The called function will simply return a failure indicator of some sort if the operation could not be performed. The functions must, of course, test the stream state in order to do this.
What you must not do is allow your program to silently accept any old input as a parameter. For example, this would be a broken implementation of strlen():
int strlen( const char * s )
{
    if ( s == 0 )
    {
        return 0; // bad
    }
    else
    {
        // calculate length not shown
    }
}
as it fields bad inputs without causing a fuss - it should instead throw an exception or use an assert(), depending on your exact development philosophy.
There's no substitute for taste, talent and experience in figuring out exactly how many safety checks should be in your code for best cost/benefit ratio for your organization.
A good-quality API is expected to be fool-proof and to guide the user with the proper amount of warnings.
Sometimes, safety precautions may impair performance. Performance is one of the most counter-intuitive things in programming. Optimize with care, only when performance really matters.
If this is part of a public SDK that you're releasing to the wild, then the exposed API calls should have strong validation. It will help your 'users' (who are developers) and ensure you aren't stuck supporting usage you never intended to support.
Otherwise, I would not add such checks. I think they make the code harder to read, and these checks are rarely tested. In the past I would add a lot of code like this to make sure my code doesn't do the wrong thing. Now I write unit tests to verify my code does the right thing. The difference? I think tests are more maintainable, more readable, and they don't clutter your production code.
In the case of opening a file that is already open, it depends on knowing the effect of the request - will it reset the current read location, for example?
In the case of closing a file that is already closed, think of it as a request for the file to be put in a known state. The code doesn't have to do anything, but the desired state is achieved, so the code can return a success condition. This is not true if there is some sort of file buffering that needs to be taken care of, or an interlinked resource to coordinate, like a modem/serial port or a printer/spooler.
Step back and think of the problem in terms of the desired outcome including any side-effects.
We once put a 'logout' link on an app menu that was displayed regardless of your login status. Why? Because it only took a simple (and very short) method to handle returning you to the login screen from the login screen, and it saved a large number of checks needed to track the login status just so the 'logout' menu item was displayed only when you were logged in.
Logically invalid invocations should always be reported to the user in debug mode.
When compiled in release mode, your code should not throw any unneeded exceptions or do anything else which could endanger the whole application.
Personally, I prefer having some kind of logfile, and logging such logically invalid invocations surely will do no harm (at least when performance is not important).