Why use a "do while" loop? [closed] - language-agnostic

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
I've never understood why using a do while loop is necessary. I understand what it does, which is to execute the code the loop contains once without checking whether the condition is true first.
But isn't the below code:
do{
document.write("ok");
}
while(x == "10");
The exact same as:
document.write("ok");
while(x == "10"){
document.write("ok");
}
Maybe I'm being very stupid and missing something obvious, but I don't see the benefit of using do while over my example above.

As you can see, you had to repeat the same line in the second example.
When you maintain the code, you most likely want those two lines to stay the same, yet you have repeated yourself.
Now imagine that was a big block and not a single line: not only would it take up precious visual space unnecessarily, it would also be harder to maintain and would attract inconsistencies.

Your example code is wrong:
do{
document.write("ok");
}while(x == "10"){
document.write("ok");
}
This would be the actual form:
do {
document.write("ok");
} while(x == "10");
You are correct that it executes the inner code block before checking the condition, but it doesn't need to duplicate the inner code block the way you have it. The do/while construct is (as you've already stated) a way to make sure that a piece of code is executed 1 or more times instead of 0 or more times (depending on the conditional).

What about:
do{
//.....
// 100000 lines of code!
//.....
} while(i%10);
Of course you will not write that:
//.....
// 100000 lines of code!
//.....
while(i%10){
//.....
// 100000 lines of code!
//.....
}
And you will then be forced to use a do-while loop.
God Bless it!!
Edit:
Or you will use procedures..
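For what it's worth, here is a rough sketch of that "procedures" alternative (Java-style; processBlock() is a made-up name standing in for the huge block, and it is assumed to update i somewhere):
processBlock();
while (i % 10 != 0) {
    processBlock();   // the same single call repeated, instead of the whole block
}
You still call the procedure twice, but at least the 100000 lines live in exactly one place.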

A do while loop will run at least once, letting you "do, then check", whereas a plain while requires the condition to be met before it runs at all.
var i = 6;
do{
alert(i);
}while( i < 5)
// alerts 6, but while criteria isn't met so it stops
var i = 6;
while( i < 5) {
alert(i);
}
// no output - while condition wasn't met

It's a better way of writing certain code. For example, this:
int count = ReadDataFromStream();
while(count != 0)
{
count = ReadDataFromStream();
}
Can be written using do-while as:
int count = 0;
do
{
count = ReadDataFromStream();
} while(count != 0);
There are better examples of do-while, but I can't recall one at the moment.

The difference matters most when you have more than one line of code that must execute at least once. Simulating a do loop with a while loop results in substantial code duplication, which is always bad. Granted, a better solution is to refactor that code into a method and then simply make the same method call before and inside the loop, but many people dislike even that much duplication, and using the do keyword declares unambiguously "this is a bottom-tested (foot-controlled) loop".
So, basically: readability and basic software engineering principles.

Although
do {
s1;
...
sn;
} while (p);
could be written as
boolean done = false;
while (!done) {
s1;
...
sn;
done = !p;
}
the former is more comprehensible.

What you're missing is that your use of do...while above is 100% WRONG!
A do...while loop in Java, for example, looks something like this:
do {
//something you want to execute at least once
} while (someBooleanCondition);
See that complete lack of a second block there? See how the while ends the statement completely? So what happens now is that the code in the {...} pair between do and while will get executed, then someBooleanCondition will get tested and if it's true, the code block will get executed again. And again. And again. Until someBooleanCondition gets tested false.
And at this point, perhaps, you'll understand why we have both forms of loop. The above code could be translated to:
//something you want to execute at least once
while (someBooleanCondition) {
//something you want to execute at least once
}
This code, however, requires you to type the same (potentially large and complicated) code twice, leaving the door open for more errors. Hence the need for a bottom-test loop instead of a top-test loop.

First of all your do..while syntax is wrong. It is like this:
do
{
document.write("ok");
}while(x=="10");
It is useful when you want to execute the body of the loop at least once before evaluating its terminating condition. For example, let's say you want to write a loop where you prompt the user for input and, depending on that input, execute some code. You would want this to execute at least once and then ask the user whether they want to continue. In such cases, a do..while loop results in less and cleaner code than a while loop.
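For what it's worth, a minimal sketch of that prompt-and-repeat pattern (Java; process() is a placeholder for whatever the input drives):
import java.util.Scanner;

Scanner in = new Scanner(System.in);
String again;
do {
    System.out.print("Enter a value: ");
    int value = in.nextInt();
    process(value);                          // placeholder for the real work
    System.out.print("Continue? (y/n): ");
    again = in.next();
} while (again.equalsIgnoreCase("y"));
The body must run at least once, because you cannot know whether the user wants to continue until you have asked.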

Good Coding Practices: When to Create New Functions

I have a certain function that uses the same few lines of code (2-5, depending on how I may change it to accommodate possible future uses) four times.
I looked at this question, but it's not specific enough for me, and doesn't match the direction I'm going for.
Here's some pseudo:
function myFunction() {
if (something) {
// Code line 1
// Code line 2
// Code line 3
}
else if (somethingElse) {
// Code line 1
// Code line 2
// Code line 3
}
else if (anotherThing) {
// Code line 1
// Code line 2
// Code line 3
}
else if (theLastThing) {
// Code line 1
// Code line 2
// Code line 3
}
else {
// Not previously used code
}
}
Those same 3 lines of code are copy/pasted (constructing the same object if any of these conditions are met). Is it a good practice to create a function that I can pass all this information to and return the necessary information when it's finished? All of these conditional statements are inside a loop that could run up to 1000 or so times.
I'm not sure whether the cost of setting up a stack frame(?) by jumping into another function, over 1000 or so iterations, is high enough to justify keeping ~15 lines of duplicated code. Obviously turning it into a function would make it more readable, but this is very specific functionality that is not used anywhere else. The function I could write to eliminate the copy/paste mentality would be something like:
function myHelperFunction(someParameter, someOtherParameter) {
// Code line 1
// Code line 2
// Code line 3
return usefulInformation;
}
And then call the function in all those conditional statements as 1 line per conditional statement:
myHelperFunction(myPassedParameter, myOtherPassedParameter);
Essentially turning those 12 lines into 4.
So the question - is this a good practice in general, to create a new function for a very small amount of code to save some space and readability? Or is the cost for jumping functions too impacting to be worth it? Should one always create a new function for any code that they might copy/paste in the future?
PS - I understand that if this bit of code were to be used in different classes or source files, it would be logical to turn it into a function to avoid needing to find all the locations where it was copy/pasted in order to make changes. But I'm talking about more or less a single-file/single-class or in-function kind of dilemma.
Also, feel free to fix my tags/title if I didn't do it correctly. I'm not really sure how to title/tag this post correctly.
The answer to any optimization question that isn't also an algorithms/data structures question is: Profile your code! Only optimize things that show up as problem areas.
Which means you should find out if function call overhead is actually a performance problem in the specific program you're writing. If it is, inline the code. If it isn't, don't. Simple as that.
You're approaching this the wrong way, in my opinion. In the first place, you shouldn't be using multiple (else)ifs that all execute the same code; use one with a compound or precomputed (in this case I recommend precomputed due to all the possible subconditions) condition. Something like this will probably make maintaining the code a lot easier.
function myFunction() {
bool condition = something ||
somethingElse ||
anotherThing ||
theLastThing;
if (condition) {
// Code line 1
// Code line 2
// Code line 3
}
else {
// Not previously used code
}
}
Yes, create a function; in general you should follow the DRY principle: Don't Repeat Yourself.
http://en.wikipedia.org/wiki/Don%27t_repeat_yourself
Your stack operations are going to be minimal for something like this. See Imre Kerr's comment on your question.
It's not just for readability. So many reasons. Maintainability is huge. If this code has to change, it will be a pain for someone else to come along and try to figure out every place to change it. It's a lot better to only have to change code in one place.
I don't know if this applies to the example that you provided, but factoring code is not the only reason to write a function; you can also think in terms of tests.
A function provides a programming unit that can be tested separately.
So it may happen that you decompose a complex operation into several simpler/more elementary units, even if those functions are only called once.
Since you asked the question for a few lines of code, you could ask yourself:
can I reasonably name this function? (justDoThis should be OK, doThisAndThatAndThenAnotherThing less so)
does it have a reasonable number of parameters? (I would say two or three)
is it worth testing it as a separate unit? (does it simplify overall testing)
is the code more readable/understandable with such a function call or not? (if the answer to the first two questions is no, it's not necessarily obvious)
This is a wonderful question, and the answer is: It depends.
Personally I would create a function for increased code readability, but if you are looking for raw efficiency maybe you would want to leave the code copied and pasted.

Should I avoid do/while and favour while? [duplicate]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
When I was taking CS in college (mid 80's), one of the ideas that was constantly repeated was to always write loops which test at the top (while...) rather than at the bottom (do ... while) of the loop. These notions were often backed up with references to studies which showed that loops which tested at the top were statistically much more likely to be correct than their bottom-testing counterparts.
As a result, I almost always write loops which test at the top. I don't do it if it introduces extra complexity in the code, but that case seems rare. I notice that some programmers tend to almost exclusively write loops that test at the bottom. When I see constructs like:
if (condition)
{
do
{
...
} while (same condition);
}
or the inverse (if inside the while), it makes me wonder if they actually wrote it that way or if they added the if statement when they realized the loop didn't handle the null case.
I've done some googling, but haven't been able to find any literature on this subject. How do you guys (and gals) write your loops?
I always follow the rule that if it should run zero or more times, test at the beginning, if it must run once or more, test at the end. I do not see any logical reason to use the code you listed in your example. It only adds complexity.
Use while loops when you want to test a condition before the first iteration of the loop.
Use do-while loops when you want to test a condition after running the first iteration of the loop.
For example, if you find yourself doing something like either of these snippets:
func();
while (condition) {
func();
}
//or:
while (true){
func();
if (!condition) break;
}
You should rewrite it as:
do{
func();
} while(condition);
The difference is that the do loop executes "do something" once and then checks the condition to see if it should repeat the "do something", while the while loop checks the condition before doing anything.
Does avoiding do/while really help make my code more readable?
No.
If it makes more sense to use a do/while loop, then do so. If you need to execute the body of a loop once before testing the condition, then a do/while loop is probably the most straightforward implementation.
The first one may not execute at all if the condition is false. The other one will execute at least once, then check the condition.
For the sake of readability it seems sensible to test at the top. The fact it is a loop is important; the person reading the code should be aware of the loop conditions before trying to comprehend the body of the loop.
Here's a good real-world example I came across recently. Suppose you have a number of processing tasks (like processing elements in an array) and you wish to split the work between one thread per CPU core present. There must be at least one core to be running the current code! So you can use a do... while something like:
do {
get_tasks_for_core();
launch_thread();
} while (cores_remaining());
It's almost negligible, but it might be worth considering the performance benefit: this could equally be written as a standard while loop, but that would always make an unnecessary initial comparison that is guaranteed to evaluate true - and on a single core, the do-while condition branches more predictably (always false, versus alternating true/false for a standard while).
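A hedged sketch of that in Java (startWorkerForCore() is a made-up helper; availableProcessors() is documented to return at least 1, so the body is guaranteed to run):
int remaining = Runtime.getRuntime().availableProcessors();   // always >= 1
do {
    startWorkerForCore(remaining - 1);   // hypothetical: hand one slice of the work to a new thread
    remaining--;
} while (remaining > 0);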
Yes, it's true: do while will run at least one time.
That's the only difference. Nothing else to debate on this.
The first tests the condition before performing so it's possible your code won't ever enter the code underneath. The second will perform the code within before testing the condition.
The while loop will check "condition" first; if it's false, it will never "do something." But the do...while loop will "do something" first, then check "condition".
Yes, just like using for instead of while, or foreach instead of for improves readability. That said some circumstances need do while and I agree you would be silly to force those situations into a while loop.
It's more helpful to think in terms of common usage. The vast majority of while loops work quite naturally with while, even if they could be made to work with do...while, so basically you should use it when the difference doesn't matter. I would thus use do...while for the rare scenarios where it provides a noticeable improvement in readability.
The use cases are different for the two. This isn't a "best practices" question.
If you want a loop to execute based exclusively on the condition, then use
for or while
If you want to do something once regardless of the condition and then continue doing it based on the condition evaluation, use
do..while
For anyone who can't think of a reason to have a one-or-more times loop:
try {
someOperation();
} catch (Exception e) {
do {
if (e instanceof ExceptionIHandleInAWierdWay) {
HandleWierdException((ExceptionIHandleInAWierdWay)e);
}
} while ((e = e.getInnerException())!= null);
}
The same could be used for any sort of hierarchical structure.
in class Node:
public Node findSelfOrParentWithText(string text) {
Node node = this;
do {
if(node.containsText(text)) {
break;
}
} while((node = node.getParent()) != null);
return node;
}
A while() checks the condition before each execution of the loop body and a do...while() checks the condition after each execution of the loop body.
Thus, do...while() loops will always execute the loop body at least once.
Functionally, a while() is equivalent to
startOfLoop:
if (!condition)
goto endOfLoop;
//loop body goes here
goto startOfLoop;
endOfLoop:
and a do...while() is equivalent to
startOfLoop:
//loop body
//goes here
if (condition)
goto startOfLoop;
Note that the implementation is probably more efficient than this. However, a do...while() does involve one less comparison than a while() so it is slightly faster. Use a do...while() if:
you know that the condition will always be true the first time around, or
you want the loop to execute once even if the condition is false to begin with.
Here is the translation:
do { y; } while(x);
Same as
{ y; } while(x) { y; }
Note that the extra set of braces is for the case where you have variable definitions in y; their scope must be kept local, just as in the do-loop case. So a do-while loop just executes its body at least once; apart from that, the two loops are identical. So if we apply this rule to your code
do {
// do something
} while (condition is true);
The corresponding while loop for your do-loop looks like
{
// do something
}
while (condition is true) {
// do something
}
Yes, you see the corresponding while for your do loop differs from your while :)
As noted by Piemasons, the difference is whether the loop executes once before doing the test, or if the test is done first so that the body of the loop might never execute.
The key question is which makes sense for your application.
To take two simple examples:
Say you're looping through the elements of an array. If the array has no elements, you don't want to process number one of zero. So you should use WHILE.
You want to display a message, accept a response, and if the response is invalid, ask again until you get a valid response. So you always want to ask once. You can't test if the response is valid until you get a response, so you have to go through the body of the loop once before you can test the condition. You should use DO/WHILE.
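A small sketch of those two cases (Java; handle(), readResponse() and isValid() are invented helpers):
// Zero-or-more case: the array may be empty, so test first.
int i = 0;
while (i < items.length) {
    handle(items[i]);
    i++;
}

// One-or-more case: you have to ask before you can validate.
String response;
do {
    response = readResponse();   // prompt and read
} while (!isValid(response));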
I tend to prefer do-while loops, myself. If the condition will always be true at the start of the loop, I prefer to test it at the end. To my eye, the whole point of testing conditions (other than assertions) is that one doesn't know the result of the test. If I see a while loop with the condition test at the top, my inclination is to consider the case that the loop executes zero times. If that can never happen, why not code in a way that clearly shows that?
They're actually meant for different things. In C, you can use the do - while construct to achieve both scenarios (runs at least once, and runs while true). But PASCAL has repeat - until and while for each scenario, and if I remember correctly, ADA has another construct that lets you quit in the middle, but of course that's not what you're asking.
My answer to your question : I like my loop with testing on top.
Both conventions are correct if you know how to write the code correctly :)
Usually the second convention ( do {} while() ) is used to avoid having a duplicated statement outside the loop. Consider the following (over-simplified) example:
a++;
while (a < n) {
a++;
}
can be written more concisely using
do {
a++;
} while (a < n)
Of course, this particular example can be written in an even more concise way as (assuming C syntax)
while (++a < n) {}
But I think you can see the point here.
while( someConditionMayBeFalse ){
// this will never run if the condition starts out false
}
// then the alternative
do{
// this will run once even if the condition is false
} while( someConditionMayBeFalse );
The difference is obvious: do-while lets you run code and then evaluate the result to see if you have to "do it again", whereas the plain while lets the whole block be skipped if the condition is not met.
I write mine pretty much exclusively testing at the top. It's less code, so for me at least, it's less potential to screw something up (e.g., copy-pasting the condition makes two places you always have to update it)
It really depends: there are situations when you want to test at the top, others when you want to test at the bottom, and still others when you want to test in the middle.
However the example given seems absurd. If you are going to test at the top, don't use an if statement and test at the bottom, just use a while statement, that's what it is made for.
You should first think of the test as part of the loop code. If the test logically belongs at the start of the loop processing, then it's a top-of-the-loop test. If the test logically belongs at the end of the loop (i.e. it decides if the loop should continue to run), then it's probably a bottom-of-the-loop test.
You will have to do something fancy if the test logically belongs in the middle. :-)
I guess some people test at the bottom because you could save one or a few machine cycles by doing that 30 years ago.
To write code that is correct, one basically needs to perform a mental, perhaps informal proof of correctness.
To prove a loop correct, the standard way is to choose a loop invariant and use an induction proof. But skip the complicated words: what you do, informally, is figure out something that is true on each iteration of the loop, and that when the loop is done, what you wanted accomplished is now true. The loop condition is false at the end, which is what makes the loop terminate.
If the loop conditions map fairly easily to the invariant, and the invariant is at the top of the loop, and one infers that the invariant is true at the next iteration of the loop by working through the code of the loop, then it is easy to figure out that the loop is correct.
However, if the invariant is at the bottom of the loop, then unless you have an assertion just prior to the loop (a good practice) it becomes more difficult, because you essentially have to infer what that invariant should be and show that any code that ran before the loop makes the invariant true (since there is no precondition test, the loop body will execute at least once). It just becomes that much more difficult to prove correct, even if it is an informal in-your-head proof.
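As a tiny illustration of that style of reasoning (my own example, assuming an int array a): a top-tested summing loop whose invariant is easy to state and check.
int sum = 0;
int i = 0;
// Invariant: sum == a[0] + ... + a[i-1]   (trivially true when i == 0)
while (i < a.length) {
    sum += a[i];
    i++;
    // The invariant holds again for the new value of i.
}
// The loop test failed, so i == a.length; the invariant now says sum is the total of all elements.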
This isn't really an answer but a reiteration of something one of my lecturers said and it interested me at the time.
The two types of loop while..do and do..while are actually instances of a third more generic loop, which has the test somewhere in the middle.
begin loop
<Code block A>
loop condition
<Code block B>
end loop
Code block A is executed at least once and B is executed zero or more times, but B isn't run on the very last (failing) iteration. A while loop is when code block A is empty, and a do..while is when code block B is empty. But if you're writing a compiler, you might be interested in generalizing both cases to a loop like this.
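In most languages that generic test-in-the-middle loop has to be spelled as an infinite loop with a break; a sketch in Java (readChunk(), atEnd() and processChunk() are made-up names):
while (true) {
    readChunk();            // code block A: always runs at least once
    if (atEnd()) {
        break;              // the loop condition, tested in the middle
    }
    processChunk();         // code block B: skipped on the final, failing iteration
}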
In a typical Discrete Structures class in computer science, it's an easy proof that there is an equivalence mapping between the two.
Stylistically, I prefer while (easy-expr) { } when easy-expr is known up front and ready to go, and the loop doesn't have a lot of repeated overhead/initialization. I prefer do { } while (somewhat-less-easy-expr); when there is more repeated overhead and the condition may not be quite so simple to set up ahead of time. If I write an infinite loop, I always use while (true) { }. I can't explain why, but I just don't like writing for (;;) { }.
I would say it is bad practice to write if..do..while constructs, for the simple reason that this increases the size of the code and causes code duplication. Code duplication is error prone and should be avoided, as any change to one part must also be made to the duplicate, which doesn't always happen. Also, bigger code means a harder time on the CPU cache. Finally, a plain while handles the empty case and saves headaches.
Only when the first iteration is fundamentally different should one use do..while, say, when the code that makes the loop condition pass (like initialization) is performed inside the loop. In other words, if it is certain that the loop will never exit on the first iteration, then yes, a do..while is appropriate.
From my limited knowledge of code generation, I think it may be a good idea to write bottom-test loops, since they enable the compiler to perform loop optimizations better. For bottom-test loops it is guaranteed that the loop executes at least once, which means loop-invariant code "dominates" the exit node and can thus be safely moved to just before the loop starts.

Continue Considered Harmful? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
Should developers avoid using continue in C# or its equivalent in other languages to force the next iteration of a loop? Would arguments for or against overlap with arguments about Goto?
I think there should be more use of continue!
Too often I come across code like:
for (...)
{
if (!cond1)
{
if (!cond2)
{
... highly indented lines ...
}
}
}
instead of
for (...)
{
if (cond1 || cond2)
{
continue;
}
...
}
Use it to make the code more readable!
Is continue any more harmful than, say, break?
If anything, in the majority of cases where I encounter/use it, I find it makes code clearer and less spaghetti-like.
You can write good code with or without continue and you can write bad code with or without continue.
There probably is some overlap with arguments about goto, but as far as I'm concerned the use of continue is equivalent to using break statements (in loops) or return statement from anywhere in a method body - if used correctly it can simplify the code (less likely to contain bugs, easier to maintain).
There are not harmful keywords. There's only harmful uses of them.
Goto is not harmful per se, neither is continue. They need to be used carefully, that's all.
If continue is causing a problem with readability, then chances are you have other problems. For example, massive amounts of code inside a for loop. If you have to write large for loops, I would try to stick to using continue close to the top of the for loop. Otherwise, a continue buried deep in the middle of a for loop can easily be missed.
I like to use continue at the beginning of loops for handling simple if conditions.
To me it makes the code more readable, since there is no extra nesting and you can see that I have explicitly dealt with these cases.
Is this the same reason that I would use a goto? Perhaps. I do use them for readability at times and to stop the nesting of code but I usually use them more for cleanup/error handling.
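Something like this minimal sketch (Java; Order and its methods are invented for illustration):
for (Order order : orders) {
    if (order == null) continue;        // simple cases handled explicitly up front
    if (order.isCancelled()) continue;
    process(order);                     // the interesting work stays un-nested
}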
I'd say: "it depends".
If you have reasonably small loop code (where you can see the whole loop body without scrolling), it's usually OK to use a continue.
However, if the loop's body is large (for example due to a big switch), and there is some follow-up code (say, below the switch), you can easily introduce bugs by adding a continue and thus sometimes skipping over that code. I have encountered this in the heart of a bytecode interpreter, where some instrumentation code was sometimes not executed due to a continue in some case branches.
This might be a somewhat artificially constructed case, but I generally try to avoid continue and use an if (but not nesting too deeply, as in Rob's sample code).
I don't think continue could ever be as difficult as goto since continue never moves execution out of the code block that it is in.
If you are iterating through any kind of result set and performing operations on the results, e.g. within a for-each, and one particular result causes a problem, it's rather useful to capture the expected error (via try-catch), log it, and move on to the next result via continue. Continue is especially useful, IMO, for unattended services that do jobs at odd hours, where one exception shouldn't affect the other X number of records.
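Roughly like this sketch (Java; Record, process() and log() are placeholders):
for (Record record : records) {
    try {
        process(record);                              // the real per-record work
    } catch (Exception e) {
        log("Skipping record " + record.getId() + ": " + e);   // log it...
        continue;                                     // ...and one bad record doesn't stop the batch
    }
}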
As far as this programmer is concerned, Nested if/else considered harmful.
Using continue at the beginning of a loop to avoid iterating over unnecessary elements is not harmful and can be very useful, but using it in the middle of nested ifs and elses can turn the loop code into a complex maze to understand and validate.
I think the avoidance of continue is also the result of a semantic misunderstanding. People who never see or write the 'continue' keyword in their own code can, when they see code with continue, interpret it as "the continuation of the natural flow". If instead of continue we had next, for instance, I think more people would appreciate this valuable iteration feature.
goto can be used as a continue, but not the reverse.
You can "goto" anywhere, thus break flow control arbitrarily.
Thus continue is not nearly as harmful.
Others have hinted at it... but continue and break are enforced by the compiler and have their own associated rules. Goto has no such limitations, though the net effect might almost be the same, in some circumstances.
I do not consider continue or break to be harmful per se, though I'm sure either can be used poorly in a way that would make any sane programmer gag.
Continue is a really useful function in most languages, because it allows blocks of code to be skipped for certain conditions.
One alternative would be to use boolean variables in if statements, but these would need to be reset after every use.
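For comparison, a sketch of that boolean-flag alternative (Java; Item and isBroken() are invented): the flag has to be re-initialised on every iteration, which is exactly what continue lets you avoid.
for (Item item : items) {
    boolean skip = false;          // must be reset each time around
    if (item.isBroken()) {
        skip = true;
    }
    if (!skip) {
        handle(item);              // the real work
    }
}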
I'd say yes. To me, it just breaks the 'flow' of a fluidly-written piece of code.
Another argument could also be that if you stick to the basic keywords supported by most modern languages, then your program flow (if not the logic or code) could be ported to any other language. Having an unsupported keyword (ie, continue or goto) would break that.
It's really more of a personal preference, but I've never had to use it and don't really consider it an option when I'm writing new code. (same as goto.)
I believe the bottom-line argument against continue is that it makes it harder to PROVE that the code is correct, prove in the mathematical sense. But it probably doesn't matter to you, because no one has the resources to 'prove' a computer program that is significantly complex.
Enter the static-analysis tools. You may make things harder on them...
And the goto, that sounds like a nightmare for the same reasons but at any random place in code.
continue feels wrong to me. break gets you out of there, but continue seems just to be spaghetti.
On the other hand, you can emulate continue with break (at least in Java).
for (String str : strs) contLp: {
...
break contLp;
...
}
(This posting had an obvious bug in the above code for over a decade. That doesn't look good for break/continue.)
continue can be useful in some circumstances, but it still feels dirty to me. It might be time to introduce a new method.
for (char c : cs) {
final int i;
if ('0' <= c && c <= '9') {
i = c - '0';
} else if ('a' <= c && c <= 'z') {
i = c - 'a' + 10;
} else {
continue;
}
... use i ...
}
These uses should be very rare.

Should a function have only one return statement?

Locked. This question and its answers are locked because the question is off-topic but has historical significance. It is not currently accepting new answers or interactions.
Are there good reasons why it's a better practice to have only one return statement in a function?
Or is it okay to return from a function as soon as it is logically correct to do so, meaning there may be many return statements in the function?
I often have several statements at the start of a method to return for "easy" situations. For example, this:
public void DoStuff(Foo foo)
{
if (foo != null)
{
...
}
}
... can be made more readable (IMHO) like this:
public void DoStuff(Foo foo)
{
if (foo == null) return;
...
}
So yes, I think it's fine to have multiple "exit points" from a function/method.
Nobody has mentioned or quoted Code Complete so I'll do it.
17.1 return
Minimize the number of returns in each routine. It's harder to understand a routine if, reading it at the bottom, you're unaware of the possibility that it returned somewhere above.
Use a return when it enhances readability. In certain routines, once you know the answer, you want to return it to the calling routine immediately. If the routine is defined in such a way that it doesn't require any cleanup, not returning immediately means that you have to write more code.
I would say it would be incredibly unwise to decide arbitrarily against multiple exit points as I have found the technique to be useful in practice over and over again, in fact I have often refactored existing code to multiple exit points for clarity. We can compare the two approaches thus:-
string fooBar(string s, int? i) {
string ret = "";
if(!string.IsNullOrEmpty(s) && i != null) {
var res = someFunction(s, i);
bool passed = true;
foreach(var r in res) {
if(!r.Passed) {
passed = false;
break;
}
}
if(passed) {
// Rest of code...
}
}
return ret;
}
Compare this to the code where multiple exit points are permitted:-
string fooBar(string s, int? i) {
var ret = "";
if(string.IsNullOrEmpty(s) || i == null) return null;
var res = someFunction(s, i);
foreach(var r in res) {
if(!r.Passed) return null;
}
// Rest of code...
return ret;
}
I think the latter is considerably clearer. As far as I can tell the criticism of multiple exit points is a rather archaic point of view these days.
I currently am working on a codebase where two of the people working on it blindly subscribe to the "single point of exit" theory and I can tell you that from experience, it's a horrible horrible practice. It makes code extremely difficult to maintain and I'll show you why.
With the "single point of exit" theory, you inevitably wind up with code that looks like this:
function()
{
HRESULT error = S_OK;
if(SUCCEEDED(Operation1()))
{
if(SUCCEEDED(Operation2()))
{
if(SUCCEEDED(Operation3()))
{
if(SUCCEEDED(Operation4()))
{
}
else
{
error = OPERATION4FAILED;
}
}
else
{
error = OPERATION3FAILED;
}
}
else
{
error = OPERATION2FAILED;
}
}
else
{
error = OPERATION1FAILED;
}
return error;
}
Not only does this make the code very hard to follow, but now say later on you need to go back and add an operation in between 1 and 2. You have to indent just about the entire freaking function, and good luck making sure all of your if/else conditions and braces are matched up properly.
This method makes code maintenance extremely difficult and error prone.
Structured programming says you should only ever have one return statement per function. This is to limit the complexity. Many people such as Martin Fowler argue that it is simpler to write functions with multiple return statements. He presents this argument in the classic refactoring book he wrote. This works well if you follow his other advice and write small functions. I agree with this point of view and only strict structured programming purists adhere to single return statements per function.
As Kent Beck notes when discussing guard clauses in Implementation Patterns making a routine have a single entry and exit point ...
"was to prevent the confusion possible
when jumping into and out of many
locations in the same routine. It made
good sense when applied to FORTRAN or
assembly language programs written
with lots of global data where even
understanding which statements were
executed was hard work ... with small methods and mostly local data, it is needlessly conservative."
I find a function written with guard clauses much easier to follow than one long nested bunch of if then else statements.
In a function that has no side-effects, there's no good reason to have more than a single return and you should write them in a functional style. In a method with side-effects, things are more sequential (time-indexed), so you write in an imperative style, using the return statement as a command to stop executing.
In other words, when possible, favor this style
return a > 0 ?
positively(a):
negatively(a);
over this
if (a > 0)
return positively(a);
else
return negatively(a);
If you find yourself writing several layers of nested conditions, there's probably a way you can refactor that, using a predicate list for example. If you find that your ifs and elses are far apart syntactically, you might want to break the code down into smaller functions. A conditional block that spans more than a screenful of text is hard to read.
There's no hard and fast rule that applies to every language. Something like having a single return statement won't make your code good. But good code will tend to allow you to write your functions that way.
I've seen it in coding standards for C++ that were a hangover from C: if you don't have RAII or other automatic memory management, then you have to clean up before each return, which means either cut-and-paste of the clean-up or a goto (logically the same as 'finally' in managed languages), both of which are considered bad form. If your practice is to use smart pointers and collections in C++ or another automatic memory system, then there isn't a strong reason for it, and it becomes all about readability, and more of a judgement call.
I lean to the idea that return statements in the middle of the function are bad. You can use returns to build a few guard clauses at the top of the function, and of course tell the compiler what to return at the end of the function without issue, but returns in the middle of the function can be easy to miss and can make the function harder to interpret.
Are there good reasons why it's a better practice to have only one return statement in a function?
Yes, there are:
The single exit point gives an excellent place to assert your post-conditions (see the sketch at the end of this answer).
Being able to put a debugger breakpoint on the one return at the end of the function is often useful.
Fewer returns means less complexity. Linear code is generally simpler to understand.
If trying to simplify a function to a single return causes complexity, then that's incentive to refactor to smaller, more general, easier-to-understand functions.
If you're in a language without destructors or if you don't use RAII, then a single return reduces the number of places you have to clean up.
Some languages require a single exit point (e.g., Pascal and Eiffel).
The question is often posed as a false dichotomy between multiple returns or deeply nested if statements. There's almost always a third solution which is very linear (no deep nesting) with only a single exit point.
Update: Apparently MISRA guidelines promote single exit, too.
To be clear, I'm not saying it's always wrong to have multiple returns. But given otherwise equivalent solutions, there are lots of good reasons to prefer the one with a single return.
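As a small sketch of the post-condition point above (clamp() is an invented example): with one exit, a single assertion covers every path.
static int clamp(int x, int lo, int hi) {
    int result = x;
    if (x < lo) {
        result = lo;
    } else if (x > hi) {
        result = hi;
    }
    assert lo <= result && result <= hi : "post-condition violated";
    return result;    // the only exit point
}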
Having a single exit point does provide an advantage in debugging, because it allows you to set a single breakpoint at the end of a function to see what value is actually going to be returned.
In general I try to have only a single exit point from a function. There are times, however, that doing so actually ends up creating a more complex function body than is necessary, in which case it's better to have multiple exit points. It really has to be a "judgement call" based on the resulting complexity, but the goal should be as few exit points as possible without sacrificing complexity and understandability.
No, because we don't live in the 1970s any more. If your function is long enough that multiple returns are a problem, it's too long.
(Quite apart from the fact that any multi-line function in a language with exceptions will have multiple exit points anyway.)
My preference would be for a single exit unless it really complicates things. I have found that in some cases, multiple exit points can mask other more significant design problems:
public void DoStuff(Foo foo)
{
if (foo == null) return;
}
On seeing this code, I would immediately ask:
Is 'foo' ever null?
If so, how many clients of 'DoStuff' ever call the function with a null 'foo'?
Depending on the answers to these questions it might be that
the check is pointless as it never is true (ie. it should be an assertion)
the check is very rarely true and so it may be better to change those specific caller functions as they should probably take some other action anyway.
In both of the above cases the code can probably be reworked with an assertion to ensure that 'foo' is never null and the relevant callers changed.
There are two other reasons (specific, I think, to C++ code) where multiple exits can actually have a negative effect. They are code size and compiler optimizations.
A non-POD C++ object in scope at the exit of a function will have its destructor called. Where there are several return statements, it may be the case that there are different objects in scope and so the list of destructors to call will be different. The compiler therefore needs to generate code for each return statement:
void foo (int i, int j) {
A a;
if (i > 0) {
B b;
return ; // Call dtor for 'b' followed by 'a'
}
if (i == j) {
C c;
B b;
return ; // Call dtor for 'b', 'c' and then 'a'
}
return ; // Call dtor for 'a'
}
If code size is an issue - then this may be something worth avoiding.
The other issue relates to "Named Return Value OptimiZation" (aka Copy Elision, ISO C++ '03 12.8/15). C++ allows an implementation to skip calling the copy constructor if it can:
A foo () {
A a1;
// do something
return a1;
}
void bar () {
A a2 ( foo() );
}
Just taking the code as is, the object 'a1' is constructed in 'foo' and then its copy constructor will be called to construct 'a2'. However, copy elision allows the compiler to construct 'a1' in the same place on the stack as 'a2'. There is therefore no need to "copy" the object when the function returns.
Multiple exit points complicates the work of the compiler in trying to detect this, and at least for a relatively recent version of VC++ the optimization did not take place where the function body had multiple returns. See Named Return Value Optimization in Visual C++ 2005 for more details.
Having a single exit point reduces Cyclomatic Complexity and therefore, in theory, reduces the probability that you will introduce bugs into your code when you change it. Practice however, tends to suggest that a more pragmatic approach is needed. I therefore tend to aim to have a single exit point, but allow my code to have several if that is more readable.
I force myself to use only one return statement, as it will in a sense generate code smell. Let me explain:
function isCorrect($param1, $param2, $param3) {
$toret = false;
if ($param1 != $param2) {
if ($param1 == ($param3 * 2)) {
if ($param2 == ($param3 / 3)) {
$toret = true;
} else {
$error = 'Error 3';
}
} else {
$error = 'Error 2';
}
} else {
$error = 'Error 1';
}
return $toret;
}
(The conditions are arbitrary...)
The more conditions, the larger the function gets, the more difficult it is to read. So if you're attuned to the code smell, you'll realise it, and want to refactor the code. Two possible solutions are:
Multiple returns
Refactoring into separate functions
Multiple Returns
function isCorrect($param1, $param2, $param3) {
if ($param1 == $param2) { $error = 'Error 1'; return false; }
if ($param1 != ($param3 * 2)) { $error = 'Error 2'; return false; }
if ($param2 != ($param3 / 3)) { $error = 'Error 3'; return false; }
return true;
}
Separate Functions
function isEqual($param1, $param2) {
return $param1 == $param2;
}
function isDouble($param1, $param2) {
return $param1 == ($param2 * 2);
}
function isThird($param1, $param2) {
return $param1 == ($param2 / 3);
}
function isCorrect($param1, $param2, $param3) {
return !isEqual($param1, $param2)
&& isDouble($param1, $param3)
&& isThird($param2, $param3);
}
Granted, it is longer and a bit messy, but in the process of refactoring the function this way, we've
created a number of reusable functions,
made the function more human readable, and
put the focus of the functions on why the values are correct.
I would say you should have as many as required, or any that make the code cleaner (such as guard clauses).
I have personally never heard/seen any "best practices" say that you should have only one return statement.
For the most part, I tend to exit a function as soon as possible based on a logic path (guard clauses are an excellent example of this).
I believe that multiple returns are usually good (in the code that I write in C#). The single-return style is a holdover from C. But you probably aren't coding in C.
There is no law requiring only one exit point for a method in all programming languages. Some people insist on the superiority of this style, and sometimes they elevate it to a "rule" or "law" but this belief is not backed up by any evidence or research.
A multiple-return style may be a bad habit in C code, where resources have to be explicitly de-allocated, but in languages such as Java, C#, Python or JavaScript, which have automatic garbage collection and try..finally blocks (and using blocks in C#), this argument does not apply - in these languages it is very uncommon to need centralised manual resource deallocation.
There are cases where a single return is more readable, and cases where it isn't. See if it reduces the number of lines of code, makes the logic clearer or reduces the number of braces and indents or temporary variables.
Therefore, use as many returns as suits your artistic sensibilities, because it is a layout and readability issue, not a technical one.
I have talked about this at greater length on my blog.
There are good things to say about having a single exit-point, just as there are bad things to say about the inevitable "arrow" programming that results.
If using multiple exit points during input validation or resource allocation, I try to put all the 'error-exits' very visibly at the top of the function.
Both the Spartan Programming article of the "SSDSLPedia" and the single function exit point article of the "Portland Pattern Repository's Wiki" have some insightful arguments around this. Also, of course, there is this post to consider.
If you really want a single exit-point (in any non-exception-enabled language) for example in order to release resources in one single place, I find the careful application of goto to be good; see for example this rather contrived example (compressed to save screen real-estate):
int f(int y) {
int value = -1;
void *data = NULL;
if (y < 0)
goto clean;
if ((data = malloc(123)) == NULL)
goto clean;
/* More code */
value = 1;
clean:
free(data);
return value;
}
Personally I, in general, dislike arrow programming more than I dislike multiple exit points, although both are useful when applied correctly. The best, of course, is to structure your program to require neither. Breaking down your function into multiple chunks usually helps :)
Although when doing so, I find I end up with multiple exit points anyway as in this example, where some larger function has been broken down into several smaller functions:
int g(int y) {
int value = 0;
if ((value = g0(y, value)) == -1)
return -1;
if ((value = g1(y, value)) == -1)
return -1;
return g2(y, value);
}
Depending on the project or coding guidelines, most of the boiler-plate code could be replaced by macros. As a side note, breaking it down this way makes the functions g0, g1, g2 very easy to test individually.
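For illustration, such a boiler-plate macro might look something like the sketch below; the macro name is made up, and it deliberately relies on the local variable 'value' and the -1 error convention from the example above:

int g0(int, int);
int g1(int, int);
int g2(int, int);

/* Hypothetical helper: run one step and bail out of the calling
 * function with -1 if that step reports failure. */
#define TRY_STEP(expr)                \
    do {                              \
        if ((value = (expr)) == -1)   \
            return -1;                \
    } while (0)

int g(int y) {
    int value = 0;
    TRY_STEP(g0(y, value));
    TRY_STEP(g1(y, value));
    return g2(y, value);
}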
Obviously, in an OO and exception-enabled language, I wouldn't use if-statements like that (or at all, if I could get away with it with little enough effort), and the code would be much more plain. And non-arrowy. And most of the non-final returns would probably be exceptions.
In short:
Few returns are better than many returns
More than one return is better than huge arrows, and guard clauses are generally ok.
Exceptions could/should probably replace most 'guard clauses' when possible.
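As a rough sketch of that last point (C++; the function and messages here are hypothetical), the guard clauses turn into exceptions thrown near the top, leaving a single normal exit:

#include <stdexcept>
#include <string>

// Hypothetical: instead of returning a sentinel such as -1 from each
// guard clause, signal the failure and let a single catch site decide.
int parsePositive(const std::string& text) {
    if (text.empty())
        throw std::invalid_argument("parsePositive: empty input");
    int value = std::stoi(text);            // may itself throw on bad input
    if (value < 0)
        throw std::out_of_range("parsePositive: negative value");
    return value;                           // the one "normal" exit
}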
You know the adage - beauty is in the eyes of the beholder.
Some people swear by NetBeans and some by IntelliJ IDEA, some by Python and some by PHP.
In some shops you could lose your job if you insist on doing this:
public void hello()
{
if (....)
{
....
}
}
The question is all about visibility and maintainability.
I am addicted to using boolean algebra to reduce and simplify logic, and to using state machines. However, there were past colleagues who believed my use of "mathematical techniques" in coding was unsuitable, because it would not be visible or maintainable. And that would be a bad practice. Sorry people, the techniques I employ are very visible and maintainable to me - because when I return to the code six months later, I understand it clearly rather than seeing a mess of proverbial spaghetti.
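As a trivial sketch of what I mean by reducing logic with boolean algebra (the condition names are made up):

// The nested, "step by step" version:
bool canShip(bool paid, bool inStock, bool blocked) {
    if (paid) {
        if (inStock) {
            if (!blocked) {
                return true;
            }
        }
    }
    return false;
}

// The same truth table, reduced to a single expression and a single exit:
bool canShipReduced(bool paid, bool inStock, bool blocked) {
    return paid && inStock && !blocked;
}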
Hey buddy (like a former client used to say) do what you want as long as you know how to fix it when I need you to fix it.
I remember 20 years ago, a colleague of mine was fired for employing what today would be called an agile development strategy. He had a meticulous incremental plan. But his manager was yelling at him, "You can't incrementally release features to users! You must stick with the waterfall." His response to the manager was that incremental development would track the customer's needs more precisely. He believed in developing for the customer's needs, but the manager believed in coding to the "customer's requirement".
We are frequently guilty for breaking data normalization, MVP and MVC boundaries. We inline instead of constructing a function. We take shortcuts.
Personally, I believe that PHP is bad practice, but what do I know. All the theoretical arguments boil down to trying to fulfill one set of rules:
quality = precision, maintainability and profitability.
All other rules fade into the background. And of course this rule never fades:
Laziness is the virtue of a good
programmer.
I lean towards using guard clauses to return early and otherwise exit at the end of a method. The single entry and exit rule has historical significance and was particularly helpful when dealing with legacy code that ran to 10 A4 pages for a single C++ method with multiple returns (and many defects). More recently, accepted good practice is to keep methods small, which makes multiple exits less of an impediment to understanding. In the following Kronoz example copied from above, the question is what occurs in // Rest of code...?:
string fooBar(string s, int? i) {
if(string.IsNullOrEmpty(s) || i == null) return null;
var res = someFunction(s, i);
foreach(var r in res) {
if(!r.Passed) return null;
}
// Rest of code...
return ret;
}
I realise the example is somewhat contrived but I would be tempted to refactor the foreach loop into a LINQ statement that could then be considered a guard clause. Again, in a contrived example the intent of the code isn't apparent and someFunction() may have some other side effect or the result may be used in the // Rest of code....
if (string.IsNullOrEmpty(s) || i == null) return null;
if (someFunction(s, i).Any(r => !r.Passed)) return null;
Giving the following refactored function:
string fooBar(string s, int? i) {
if (string.IsNullOrEmpty(s) || i == null) return null;
if (someFunction(s, i).Any(r => !r.Passed)) return null;
// Rest of code...
return ret;
}
One good reason I can think of is for code maintenance: you have a single point of exit. If you want to change the format of the result,..., it's just much simpler to implement. Also, for debugging, you can just stick a breakpoint there :)
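A minimal sketch of that idea (hypothetical function, C++): every path assigns into one variable, so a change to the result format, or a breakpoint, only involves the last line:

#include <string>

// Hypothetical: all paths funnel through 'result', so reformatting the
// return value (or setting one breakpoint) happens in exactly one place.
std::string describeStatus(int code) {
    std::string result;
    if (code == 0) {
        result = "ok";
    } else if (code < 0) {
        result = "error";
    } else {
        result = "warning";
    }
    return "[status] " + result;   // single exit: a format change goes here
}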
Having said that, I once had to work in a library where the coding standards imposed 'one return statement per function', and I found it pretty tough. I write lots of numerical computations code, and there often are 'special cases', so the code ended up being quite hard to follow...
Multiple exit points are fine for small enough functions -- that is, a function that can be viewed in its entirety on one screen. If a lengthy function likewise includes multiple exit points, it's a sign that the function can be chopped up further.
That said, I avoid multiple-exit functions unless absolutely necessary. I have felt the pain of bugs caused by some stray return on some obscure line in more complex functions.
I've worked with terrible coding standards that forced a single exit path on you and the result is nearly always unstructured spaghetti if the function is anything but trivial -- you end up with lots of breaks and continues that just get in the way.
Single exit point - all other things equal - makes code significantly more readable.
But there's a catch: the popular construction
resulttype res;
if if if...
return res;
is a fake: "res =" is not much better than "return". It has a single return statement, but multiple points where the function actually ends.
If you have a function with multiple returns (or "res ="s), it's often a good idea to break it into several smaller functions with a single exit point.
My usual policy is to have only one return statement at the end of a function unless the complexity of the code is greatly reduced by adding more. In fact, I'm rather a fan of Eiffel, which enforces the one-return rule by having no return statement at all (there's just an auto-created 'result' variable to put your result in).
There certainly are cases where code can be made clearer with multiple returns than the obvious version without them would be. One could argue that more rework is needed if you have a function that is too complex to be understandable without multiple return statements, but sometimes it's good to be pragmatic about such things.
If you end up with more than a few returns, there may be something wrong with your code. Otherwise I would agree that sometimes it is nice to be able to return from multiple places in a subroutine, especially when it makes the code cleaner.
Perl 6: Bad Example
sub Int_to_String( Int $i ){
given( $i ){
when 0 { return "zero" }
when 1 { return "one" }
when 2 { return "two" }
when 3 { return "three" }
when 4 { return "four" }
...
default { return undef }
}
}
would be better written like this
Perl 6: Good Example
my @Int_to_String = qw{
zero
one
two
three
four
...
}
sub Int_to_String( Int $i ){
return undef if $i < 0;
return undef unless $i < @Int_to_String.elems;
return @Int_to_String[$i];
}
Note: this was just a quick example.
I vote for a single return at the end as a guideline. It helps keep the common clean-up handling in one place. For example, take a look at the following code ...
void ProcessMyFile (char *szFileName)
{
FILE *fp = NULL;
char *pbyBuffer = NULL;
do {
fp = fopen (szFileName, "r");
if (NULL == fp) {
break;
}
pbyBuffer = malloc (__SOME__SIZE___);
if (NULL == pbyBuffer) {
break;
}
/*** Do some processing with file ***/
} while (0);
if (pbyBuffer) {
free (pbyBuffer);
}
if (fp) {
fclose (fp);
}
}
This is probably an unusual perspective, but I think that anyone who believes that multiple return statements are to be favoured has never had to use a debugger on a microprocessor that supports only 4 hardware breakpoints. ;-)
While the concerns about "arrow code" are completely valid, one thing you lose with multiple return statements is debugger convenience: you have no single catch-all position to put a breakpoint on to guarantee that you're going to see the exit and hence the return condition.
The more return statements you have in a function, the higher the complexity of that one method. If you find yourself wondering if you have too many return statements, you might want to ask yourself if you have too many lines of code in that function.
But, no, there is nothing wrong with one or many return statements. In some languages it is better practice (C++) than in others (C).
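One way to read that last remark: in C++, RAII releases resources on every exit path automatically, which makes early returns less risky there than in plain C. A hedged sketch, loosely based on the earlier file-processing example (the processing itself is elided):

#include <cstdio>
#include <memory>

// Custom deleter so the FILE* is closed on *every* return path.
struct FileCloser {
    void operator()(std::FILE* f) const { if (f) std::fclose(f); }
};

void ProcessMyFile(const char* fileName) {
    std::unique_ptr<std::FILE, FileCloser> fp(std::fopen(fileName, "r"));
    if (!fp) {
        return;              // early return: nothing to release by hand
    }
    // ... do some processing with fp.get() ...
}                            // fp's destructor closes the file here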