Returning values in Elixir?

I've recently decided to learn Elixir. Coming from a C++/Java/JavaScript background I've been having a lot of trouble grasping the basics. This might sound stupid, but how would return statements work in Elixir? I've looked around and it seems as though it's just the last line of a function, i.e.
def hello do
  "Hello World!"
end
Would this function return "Hello World!"? Is there another way to do a return? Also, how would you return early? In JavaScript we could write something like this to find out if an array has a certain value in it:
function foo(a) {
  for (var i = 0; i < a.length; i++) {
    if (a[i] == "22") {
      return true;
    }
  }
  return false;
}
How would that work in Elixir?

Elixir has no 'break out' keyword that would be equivalent to the 'return' keyword in other languages.
Typically what you would do is re-structure your code so the last statement executed is the return value.
Below is an idiomatic way in Elixir to perform the equivalent of your test, assuming you meant 'a' to be something that behaves kind of like an array in your initial example:
def is_there_a_22(a) do
  Enum.any?(a, fn item -> item == "22" end)
end
What's actually going on here is we're restructuring our thinking a little bit. Instead of the procedural approach, where we'd search through the array and return early if we find what we were looking for, we're going to ask what you are really after in the code snippet:
"Does this array have a 22 anywhere?"
We are then going to use the elixir Enum library to run through the array for us, and provide the any? function with a test which will tell us if anything matches the criteria we cared about.
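For illustration, here is a minimal, self-contained sketch of how that could be used (the module name MyList is made up for the example; only Enum.any?/2 and the function above come from the answer):

defmodule MyList do
  # true if any element of the list equals the string "22"
  def is_there_a_22(a) do
    Enum.any?(a, fn item -> item == "22" end)
  end
end

MyList.is_there_a_22(["1", "22", "3"])   #=> true
MyList.is_there_a_22(["1", "2", "3"])    #=> false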
While this is a case that is easily solved with enumeration, I think it's possible the heart of your question applies more to things such as the 'bouncer pattern' used in procedural methods. For example, if I meet certain criteria in a method, return right away. A good example of this would be a method that returns false if the thing you're going to do work on is null:
function is_my_property_true(obj) {
  if (obj == null) {
    return false;
  }
  // .....lots of code....
  return some_result_from_obj;
}
The only way to really accomplish this in Elixir is to put the guard up front and factor out a method:
def test_property_value_from_obj(obj) do
  # ...a bunch of code to get the property we
  # want and see if it is true
end

def is_my_property_true(obj) do
  case obj do
    nil -> false
    _ -> test_property_value_from_obj(obj)
  end
end
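Another common way to spell the same guard, shown here only as a sketch of the pattern, is to pattern-match on nil directly in the function heads rather than inside a case; the 'bouncer' check simply becomes its own clause:

def is_my_property_true(nil), do: false

def is_my_property_true(obj) do
  test_property_value_from_obj(obj)
end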
tl;dr - No there isn't an equivalent - you need to structure your code accordingly. On the flip side, this tends to keep your methods small - and your intent clear.

You are correct in that the last expression is always returned. Even more - there are no statements in the language, just expressions. Every construct has a value - if, case, etc. There is also no early return. This may seem limiting, but you quickly get used to it.
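For instance, here is a small sketch of what "everything is an expression" means in practice (the names x, parity and label are made up for the example); both if and case produce a value that can be bound to a variable:

x = 5

parity =
  if rem(x, 2) == 0 do
    :even
  else
    :odd
  end

label =
  case parity do
    :even -> "divisible by two"
    :odd -> "not divisible by two"
  end

# label is now "not divisible by two"; no return statement was needed anywhere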
A normal way to solve your example with Elixir would be to use a higher order function:
def foo(list) do
  Enum.any?(list, fn x -> x == "22" end)
end

Related

Why does 'return' end a function

I'm just curious about why return ends the function.
Why do we not write
function Foo() {
  BAR = calculate();
  give back BAR;
  // do sth later
  log(BAR);
}
Why do we need to do this?
function Foo() {
  BAR = calculate();
  log(BAR);
  return BAR;
}
Is this to prevent multiple usage of a give back/return value in a function?
The idea of a function stems from mathematics, e.g. x = f(y). Once you have computed f(y) for a specific value of y, you can simply substitute that value in that equation for the same result, e.g. x = 42. So the notion of a function having one result or one return value is quite strong. Further, such mathematical functions are pure, meaning they have no side effects. In the above formula it doesn’t make a difference whether you write f(y) or its computed result 42, the function doesn’t do anything else and hence won’t change the result. Being able to make these assumptions makes it much easier to reason about formulas and programs.
return in programming also has practical implementation implications, as most languages typically pop the stack upon returning, based on the assumption/restriction that it’s not needed any further.
Many languages do allow a function to "spit out" a value yet continue, which is usually implemented as generators and the yield keyword. However, the generator won't typically simply continue running in the background; it needs to be explicitly invoked again to yield its next value. A transfer of control is necessary; either the generator runs, or its caller does, they can't both run simultaneously.
If you did want to run two pieces of code simultaneously, be that a generator or a function’s “after return block”, you need to decide on a mode of multitasking like threading, or cooperative multitasking (async execution) or something else, which brings with it all the fun difficulties of managing shared resource access and the like. While it’s not unthinkable to write a language which would handle that implicitly and elegantly, elegant implicit multitasking which manages all these difficulties automagically simply does not fit into most C-like languages. Which is likely one of many reasons leading to a simple stack-popping, function-terminating return statement.
Using return gives you a lot of flexibility regarding where, when and how you return the value of a function as well as an easy to read statement of 'I am now returning this value'.
If following your idea, you could have a situation where the function got evaluated to some value and you have to figure out whether that assignment got changed somewhere later in the flow.

Any research on maintainability of "guard statement" vs. "single function exit point" paradigm available?

I'm wondering if there has been any research (both casual and robust) on the maintainability of projects that use the "guard statement" paradigm vs. the "single function exit point" paradigm?
Guard statement example (in C#):
string GetSomeString()
{
    if(necessaryConditionFails) { return null; }
    if(!FunctionWithBoolReturn(someAttribute)) { return null; }
    //all necessary conditions have been met
    //do regular processing...
    return finalStringValue;
}
single function exit point example (in C#):
string GetSomeString()
{
    string valueToReturn = null;
    if(necessaryConditionPasses && FunctionWithBoolReturn(someAttribute))
    {
        //all necessary conditions have been met
        //do regular processing...
        valueToReturn = finalStringValue;
    }
    return valueToReturn;
}
I know the merits and failings of both have been debated endlessly on SO, but I'm looking for actual research into how maintainable each paradigm is*. This may be unknown, but I figured if the information is out there, someone on SO would know where it was. My web searches have not been successful so far.
*I'm also aware that many programmers (including me) use both principles throughout their code, depending on the situation. I'm just hoping to discover which one has a proven track record of greater maintainability to use as the preferred paradigm.
Forcing single exit points has some problems of its own.
The first one is that it may lead to complex constructions.
Imagine a function in which you have to open a file, read a line, convert the line to a number and return that number, or zero if something goes wrong.
With a single exit point, you end up using lots of nested if's (if file exists open it, if open succeeds read the line, if read succeeds convert the value to an integer), which makes your code unreadable.
Some of it can be solved by having a label at the end of the function and using goto's (we had to use that in the past since we also used the single exit point and preferred readability) but it's not ideal.
Second, if you are using exceptions you are forced to catch everything if you want your single exit point again.
So, personally, I prefer putting lots of checks (and asserts) in the beginning and during the execution of the function, and exit the function at the first sign of trouble.

Can if statements be implemented as function calls?

One of the stylistic 'conventions' I find slightly irritating in published code, is the use of:
if(condition) {
instead of (my preference):
if (condition) {
A slight difference, and probably an unimportant one, but it occurred to me that the first style might be justified if 'if' statements were implemented as a kind of function call. Then I could stop objecting to it.
Does anyone know of a programming language where an if statement is implemented as a function call, where the argument is a single boolean expression?
EDIT: I realise the blocks following the if() are problematic, and the way I expressed my question was probably too naive, but I'm encouraged by the answers so far.
Tcl is one language which implements if as a regular built-in function/command which takes two parameters: the condition and the code block to execute.
if {$vbl == 1} { puts "vbl is one" }
http://tmml.sourceforge.net/doc/tcl/if.html
In fact, all language constructs in Tcl (for loop, while loop etc.) are implemented as commands/functions.
It's impossible for it to have a single argument since it has to decide which code path to follow, which would have to be done outside of said function. It would need at least two arguments, but three would allow an "else" condition.
Lisp's if has exactly the same syntax as any other macro in the language (it's not quite exactly a function, but the difference is minimal): (if cond then else)
Both the 'then' and 'else' clauses are left unevaluated unless the condition selects them.
In Smalltalk, an if statement is kind of a function call -- sort of, in (of course) a completely object oriented way, so it's really a method not a free function. I'm not sure how it would affect your thinking on syntax though, since the syntax is completely different, looking like:
someBoolean
    ifTrue: [ do_something ]
    ifFalse: [ do_something_else ]
Given that this doesn't contain any parentheses at all, you can probably interpret it as proving whatever you wanted to believe. :-)
If the if function is to be a regular function, then it can't just take the condition, it needs as its parameters the block of code to run depending on whether the condition evaluates to true or not.
A prototype for a function like that in C++ might be something along the lines of
void custom_if(bool cond, void (*block)());
This function can either call the block function, or not, depending on cond.
In some functional languages things are much easier. In Haskell, a simple function like:
if' True a _ = a
if' _ _ b = b
allows you to write code like this:
if' (1 == 1)
(putStrLn "Here")
(putStrLn "There")
which will always print Here.
I don't know of any languages where if(condition) is implemented as a regular function call, but Perl implements try { } catch { } and similar block-taking constructs as function calls.

True until disproven or false until proven?

I've noticed something about my coding that is slightly undefined. Say we have a two-dimensional array, a matrix or a table, and we are looking through it to check if a property is true for every row or nested dimension.
Say I have a boolean flag that is to be used to check if a property is true or false. My options are to:
1. Initialize it to true and check each cell until proven false. This gives it a wrong name until the code is completely executed.
2. Start on false and check each row until proven true. Only if all rows are true will the data be correct. What is the cleanest way to do this, without a counter?
I've always done 1 without thinking but today it got me wondering. What about 2?
Depends on which one dumps you out of the loop first, IMHO.
For example, with an OR situation, I'd default to false, and as soon as you get a TRUE, return the result, otherwise return the default as the loop falls through.
For an AND situation, I'd do the opposite.
They actually both amount to the same thing and since you say "check if a property is true for every row or nested dimension", I believe the first method is easier to read and perhaps slightly faster.
You shouldn't try to read the value of the flag until the code is completely executed anyway, because the check isn't finished. If you're running asynchronous code, you should guard against accessing the value while it is unstable.
Both methods "give a wrong name" until the code is executed. 1 gives false positives and 2 gives false negatives. I'm not sure what you're trying to avoid by saying this - if you could get the "correct" value before fully running your code, you wouldn't have needed to run your code in the first place.
How to implement each without a counter (if you don't have a foreach syntax in your language, use the appropriate enumerator->next loop syntax):
1:
bool flag = true;
foreach(item in array)
{
    if(!check(item))
    {
        flag = false;
        break;
    }
}
2:
bool flag = false;
foreach(item in array)
{
    if(!check(item))
    {
        break;
    }
    else if(item.IsLast())
    {
        flag = true;
    }
}
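For comparison with the opening Elixir question: in a language with higher-order enumeration functions, the flag disappears entirely. A sketch (the nested-list shape and the cell > 0 check are made up for illustration):

matrix = [[1, 2], [3, 4]]

# "is the property true for every cell?" - no flag variable, no counter,
# and the traversal stops at the first failing cell
Enum.all?(matrix, fn row ->
  Enum.all?(row, fn cell -> cell > 0 end)
end)
#=> true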
Go with the first option. An algorithm always has preconditions, postconditions and invariants. If your invariant is "bool x is true iff all rows from 0-currentN have a positive property", then everything is fine.
Don't make your algorithm more complex just to make the full program state valid per row-iteration. Refactor the method, extract it, and make it "atomic" with your language's mechanics (Java: synchronized).
Personally I just throw the whole loop into a somewhat reusable method/function called isPropertyAlwaysTrue(property, array[][]) and have it return false directly if it finds a case where it's not true.
The inversion of logic, by the way, does not get you out of there any quicker. For instance, if you want the first non-true case, saying areAnyFalse or areAllTrue will have an inverted output, but will have to test the exact same cases.
This is also the case with areAnyTrue and areAllFalse--different words for the exact same algorithm (return as soon as you find a true).
You cannot compare areAnyFalse with areAnyTrue because they are testing for a completely different situation.
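To make the "different words for the exact same algorithm" point concrete, here is the relationship written out in Elixir (chosen only because it has convenient higher-order helpers; the list and pred values are made up for illustration):

list = [1, 2, 3]
pred = fn x -> x > 0 end

# "are all true?" is simply the negation of "is any false?";
# both walk the same elements and stop at the same first failing element
Enum.all?(list, pred) == (not Enum.any?(list, fn x -> not pred.(x) end))
#=> true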
Make the property name something like isThisTrue. Then it's answered "yes" or "no" but it's always meaningful.
In Ruby and Scheme you can use a question mark in the name: isThisTrue?. In a lot of other languages, there is a convention of putting "p" for "predicate" on the name -- null-p for a test returning true or false, in LISP.
I agree with Will Hartung.
If you are worried about (1) then just choose a better name for your boolean. IsNotSomething instead of IsSomething.

Should a function have only one return statement?

Are there good reasons why it's a better practice to have only one return statement in a function?
Or is it okay to return from a function as soon as it is logically correct to do so, meaning there may be many return statements in the function?
I often have several statements at the start of a method to return for "easy" situations. For example, this:
public void DoStuff(Foo foo)
{
    if (foo != null)
    {
        ...
    }
}
... can be made more readable (IMHO) like this:
public void DoStuff(Foo foo)
{
    if (foo == null) return;
    ...
}
So yes, I think it's fine to have multiple "exit points" from a function/method.
Nobody has mentioned or quoted Code Complete so I'll do it.
17.1 return
Minimize the number of returns in each routine. It's harder to understand a routine if, reading it at the bottom, you're unaware of the possibility that it returned somewhere above.
Use a return when it enhances readability. In certain routines, once you know the answer, you want to return it to the calling routine immediately. If the routine is defined in such a way that it doesn't require any cleanup, not returning immediately means that you have to write more code.
I would say it would be incredibly unwise to decide arbitrarily against multiple exit points as I have found the technique to be useful in practice over and over again, in fact I have often refactored existing code to multiple exit points for clarity. We can compare the two approaches thus:-
string fooBar(string s, int? i) {
    string ret = "";
    if(!string.IsNullOrEmpty(s) && i != null) {
        var res = someFunction(s, i);
        bool passed = true;
        foreach(var r in res) {
            if(!r.Passed) {
                passed = false;
                break;
            }
        }
        if(passed) {
            // Rest of code...
        }
    }
    return ret;
}
Compare this to the code where multiple exit points are permitted:-
string fooBar(string s, int? i) {
    var ret = "";
    if(string.IsNullOrEmpty(s) || i == null) return null;
    var res = someFunction(s, i);
    foreach(var r in res) {
        if(!r.Passed) return null;
    }
    // Rest of code...
    return ret;
}
I think the latter is considerably clearer. As far as I can tell the criticism of multiple exit points is a rather archaic point of view these days.
I currently am working on a codebase where two of the people working on it blindly subscribe to the "single point of exit" theory and I can tell you that from experience, it's a horrible horrible practice. It makes code extremely difficult to maintain and I'll show you why.
With the "single point of exit" theory, you inevitably wind up with code that looks like this:
function()
{
    HRESULT error = S_OK;

    if(SUCCEEDED(Operation1()))
    {
        if(SUCCEEDED(Operation2()))
        {
            if(SUCCEEDED(Operation3()))
            {
                if(SUCCEEDED(Operation4()))
                {
                }
                else
                {
                    error = OPERATION4FAILED;
                }
            }
            else
            {
                error = OPERATION3FAILED;
            }
        }
        else
        {
            error = OPERATION2FAILED;
        }
    }
    else
    {
        error = OPERATION1FAILED;
    }

    return error;
}
Not only does this make the code very hard to follow, but now say later on you need to go back and add an operation in between 1 and 2. You have to indent just about the entire freaking function, and good luck making sure all of your if/else conditions and braces are matched up properly.
This method makes code maintenance extremely difficult and error prone.
Structured programming says you should only ever have one return statement per function. This is to limit the complexity. Many people such as Martin Fowler argue that it is simpler to write functions with multiple return statements. He presents this argument in the classic refactoring book he wrote. This works well if you follow his other advice and write small functions. I agree with this point of view and only strict structured programming purists adhere to single return statements per function.
As Kent Beck notes when discussing guard clauses in Implementation Patterns making a routine have a single entry and exit point ...
"was to prevent the confusion possible
when jumping into and out of many
locations in the same routine. It made
good sense when applied to FORTRAN or
assembly language programs written
with lots of global data where even
understanding which statements were
executed was hard work ... with small methods and mostly local data, it is needlessly conservative."
I find a function written with guard clauses much easier to follow than one long nested bunch of if then else statements.
In a function that has no side-effects, there's no good reason to have more than a single return and you should write them in a functional style. In a method with side-effects, things are more sequential (time-indexed), so you write in an imperative style, using the return statement as a command to stop executing.
In other words, when possible, favor this style
return a > 0 ?
    positively(a) :
    negatively(a);
over this
if (a > 0)
    return positively(a);
else
    return negatively(a);
If you find yourself writing several layers of nested conditions, there's probably a way you can refactor that, using predicate list for example. If you find that your ifs and elses are far apart syntactically, you might want to break that down into smaller functions. A conditional block that spans more than a screenful of text is hard to read.
There's no hard and fast rule that applies to every language. Something like having a single return statement won't make your code good. But good code will tend to allow you to write your functions that way.
I've seen it in coding standards for C++ that were a hang-over from C: if you don't have RAII or other automatic memory management, then you have to clean up for each return, which either means cut-and-paste of the clean-up or a goto (logically the same as 'finally' in managed languages), both of which are considered bad form. If your practices are to use smart pointers and collections in C++ or another automatic memory system, then there isn't a strong reason for it, and it becomes all about readability, and more of a judgement call.
I lean to the idea that return statements in the middle of the function are bad. You can use returns to build a few guard clauses at the top of the function, and of course tell the compiler what to return at the end of the function without issue, but returns in the middle of the function can be easy to miss and can make the function harder to interpret.
Are there good reasons why it's a better practice to have only one return statement in a function?
Yes, there are:
The single exit point gives an excellent place to assert your post-conditions.
Being able to put a debugger breakpoint on the one return at the end of the function is often useful.
Fewer returns means less complexity. Linear code is generally simpler to understand.
If trying to simplify a function to a single return causes complexity, then that's incentive to refactor to smaller, more general, easier-to-understand functions.
If you're in a language without destructors or if you don't use RAII, then a single return reduces the number of places you have to clean up.
Some languages require a single exit point (e.g., Pascal and Eiffel).
The question is often posed as a false dichotomy between multiple returns or deeply nested if statements. There's almost always a third solution which is very linear (no deep nesting) with only a single exit point.
Update: Apparently MISRA guidelines promote single exit, too.
To be clear, I'm not saying it's always wrong to have multiple returns. But given otherwise equivalent solutions, there are lots of good reasons to prefer the one with a single return.
Having a single exit point does provide an advantage in debugging, because it allows you to set a single breakpoint at the end of a function to see what value is actually going to be returned.
In general I try to have only a single exit point from a function. There are times, however, that doing so actually ends up creating a more complex function body than is necessary, in which case it's better to have multiple exit points. It really has to be a "judgement call" based on the resulting complexity, but the goal should be as few exit points as possible without sacrificing complexity and understandability.
No, because we don't live in the 1970s any more. If your function is long enough that multiple returns are a problem, it's too long.
(Quite apart from the fact that any multi-line function in a language with exceptions will have multiple exit points anyway.)
My preference would be for single exit unless it really complicates things. I have found that in some cases, multiple exit points can mask other more significant design problems:
public void DoStuff(Foo foo)
{
    if (foo == null) return;
}
On seeing this code, I would immediately ask:
Is 'foo' ever null?
If so, how many clients of 'DoStuff' ever call the function with a null 'foo'?
Depending on the answers to these questions it might be that
the check is pointless as it never is true (ie. it should be an assertion)
the check is very rarely true and so it may be better to change those specific caller functions as they should probably take some other action anyway.
In both of the above cases the code can probably be reworked with an assertion to ensure that 'foo' is never null and the relevant callers changed.
There are two other reasons (specific, I think, to C++ code) where multiple exits can actually have a negative effect. They are code size, and compiler optimizations.
A non-POD C++ object in scope at the exit of a function will have its destructor called. Where there are several return statements, it may be the case that there are different objects in scope and so the list of destructors to call will be different. The compiler therefore needs to generate code for each return statement:
void foo (int i, int j) {
    A a;
    if (i > 0) {
        B b;
        return;   // Call dtor for 'b' followed by 'a'
    }
    if (i == j) {
        C c;
        B b;
        return;   // Call dtor for 'b', 'c' and then 'a'
    }
    return;       // Call dtor for 'a'
}
If code size is an issue - then this may be something worth avoiding.
The other issue relates to "Named Return Value OptimiZation" (aka Copy Elision, ISO C++ '03 12.8/15). C++ allows an implementation to skip calling the copy constructor if it can:
A foo () {
    A a1;
    // do something
    return a1;
}

void bar () {
    A a2 ( foo() );
}
Just taking the code as is, the object 'a1' is constructed in 'foo' and then its copy constructor will be called to construct 'a2'. However, copy elision allows the compiler to construct 'a1' in the same place on the stack as 'a2'. There is therefore no need to "copy" the object when the function returns.
Multiple exit points complicates the work of the compiler in trying to detect this, and at least for a relatively recent version of VC++ the optimization did not take place where the function body had multiple returns. See Named Return Value Optimization in Visual C++ 2005 for more details.
Having a single exit point reduces Cyclomatic Complexity and therefore, in theory, reduces the probability that you will introduce bugs into your code when you change it. Practice however, tends to suggest that a more pragmatic approach is needed. I therefore tend to aim to have a single exit point, but allow my code to have several if that is more readable.
I force myself to use only one return statement, as it will in a sense generate code smell. Let me explain:
function isCorrect($param1, $param2, $param3) {
    $toret = false;
    if ($param1 != $param2) {
        if ($param1 == ($param3 * 2)) {
            if ($param2 == ($param3 / 3)) {
                $toret = true;
            } else {
                $error = 'Error 3';
            }
        } else {
            $error = 'Error 2';
        }
    } else {
        $error = 'Error 1';
    }
    return $toret;
}
(The conditions are arbitrary...)
The more conditions, the larger the function gets, the more difficult it is to read. So if you're attuned to the code smell, you'll realise it, and want to refactor the code. Two possible solutions are:
Multiple returns
Refactoring into separate functions
Multiple Returns
function isCorrect($param1, $param2, $param3) {
    if ($param1 == $param2)       { $error = 'Error 1'; return false; }
    if ($param1 != ($param3 * 2)) { $error = 'Error 2'; return false; }
    if ($param2 != ($param3 / 3)) { $error = 'Error 3'; return false; }
    return true;
}
Separate Functions
function isEqual($param1, $param2) {
    return $param1 == $param2;
}

function isDouble($param1, $param2) {
    return $param1 == ($param2 * 2);
}

function isThird($param1, $param2) {
    return $param1 == ($param2 / 3);
}

function isCorrect($param1, $param2, $param3) {
    return !isEqual($param1, $param2)
        && isDouble($param1, $param3)
        && isThird($param2, $param3);
}
Granted, it is longer and a bit messy, but in the process of refactoring the function this way, we've:
created a number of reusable functions,
made the function more human readable, and
put the focus of the functions on why the values are correct.
I would say you should have as many as required, or any that make the code cleaner (such as guard clauses).
I have personally never heard/seen any "best practices" say that you should have only one return statement.
For the most part, I tend to exit a function as soon as possible based on a logic path (guard clauses are an excellent example of this).
I believe that multiple returns are usually good (in the code that I write in C#). The single-return style is a holdover from C. But you probably aren't coding in C.
There is no law requiring only one exit point for a method in all programming languages. Some people insist on the superiority of this style, and sometimes they elevate it to a "rule" or "law" but this belief is not backed up by any evidence or research.
A more-than-one-return style may be a bad habit in C code, where resources have to be explicitly de-allocated, but in languages such as Java, C#, Python or JavaScript, which have constructs such as automatic garbage collection and try..finally blocks (and using blocks in C#), this argument does not apply - in these languages, it is very uncommon to need centralised manual resource deallocation.
There are cases where a single return is more readable, and cases where it isn't. See if it reduces the number of lines of code, makes the logic clearer or reduces the number of braces and indents or temporary variables.
Therefore, use as many returns as suits your artistic sensibilities, because it is a layout and readability issue, not a technical one.
I have talked about this at greater length on my blog.
There are good things to say about having a single exit-point, just as there are bad things to say about the inevitable "arrow" programming that results.
If using multiple exit points during input validation or resource allocation, I try to put all the 'error-exits' very visibly at the top of the function.
Both the Spartan Programming article of the "SSDSLPedia" and the single function exit point article of the "Portland Pattern Repository's Wiki" have some insightful arguments around this. Also, of course, there is this post to consider.
If you really want a single exit-point (in any non-exception-enabled language) for example in order to release resources in one single place, I find the careful application of goto to be good; see for example this rather contrived example (compressed to save screen real-estate):
int f(int y) {
    int value = -1;
    void *data = NULL;
    if (y < 0)
        goto clean;
    if ((data = malloc(123)) == NULL)
        goto clean;
    /* More code */
    value = 1;
clean:
    free(data);
    return value;
}
Personally I, in general, dislike arrow programming more than I dislike multiple exit-points, although both are useful when applied correctly. The best, of course, is to structure your program to require neither. Breaking down your function into multiple chunks usually help :)
Although when doing so, I find I end up with multiple exit points anyway as in this example, where some larger function has been broken down into several smaller functions:
int g(int y) {
    int value = 0;
    if ((value = g0(y, value)) == -1)
        return -1;
    if ((value = g1(y, value)) == -1)
        return -1;
    return g2(y, value);
}
Depending on the project or coding guidelines, most of the boiler-plate code could be replaced by macros. As a side note, breaking it down this way makes the functions g0, g1, g2 very easy to test individually.
Obviously, in an OO and exception-enabled language, I wouldn't use if-statements like that (or at all, if I could get away with it with little enough effort), and the code would be much more plain. And non-arrowy. And most of the non-final returns would probably be exceptions.
In short:
Few returns are better than many returns
More than one return is better than huge arrows, and guard clauses are generally ok.
Exceptions could/should probably replace most 'guard clauses' when possible.
You know the adage - beauty is in the eyes of the beholder.
Some people swear by NetBeans and some by IntelliJ IDEA, some by Python and some by PHP.
In some shops you could lose your job if you insist on doing this:
public void hello()
{
    if (....)
    {
        ....
    }
}
The question is all about visibility and maintainability.
I am addicted to using boolean algebra to reduce and simplify logic and use of state machines. However, there were past colleagues who believed my use of "mathematical techniques" in coding was unsuitable, because it would not be visible and maintainable. And that would be a bad practice. Sorry people, the techniques I employ are very visible and maintainable to me - because when I return to the code six months later, I understand the code clearly rather than seeing a mess of proverbial spaghetti.
Hey buddy (like a former client used to say) do what you want as long as you know how to fix it when I need you to fix it.
I remember 20 years ago, a colleague of mine was fired for employing what today would be called agile development strategy. He had a meticulous incremental plan. But his manager was yelling at him "You can't incrementally release features to users! You must stick with the waterfall." His response to the manager was that incremental development would be more precise to customer's needs. He believed in developing for the customers needs, but the manager believed in coding to "customer's requirement".
We are frequently guilty for breaking data normalization, MVP and MVC boundaries. We inline instead of constructing a function. We take shortcuts.
Personally, I believe that PHP is bad practice, but what do I know. All the theoretical arguments boil down to trying to fulfill one set of rules:
quality = precision, maintainability
and profitability.
All other rules fade into the background. And of course this rule never fades:
Laziness is the virtue of a good programmer.
I lean towards using guard clauses to return early and otherwise exit at the end of a method. The single entry and exit rule has historical significance and was particularly helpful when dealing with legacy code that ran to 10 A4 pages for a single C++ method with multiple returns (and many defects). More recently, accepted good practice is to keep methods small, which makes multiple exits less of an impediment to understanding. In the following Kronoz example copied from above, the question is what occurs in // Rest of code...?:
string fooBar(string s, int? i) {
    if(string.IsNullOrEmpty(s) || i == null) return null;
    var res = someFunction(s, i);
    foreach(var r in res) {
        if(!r.Passed) return null;
    }
    // Rest of code...
    return ret;
}
I realise the example is somewhat contrived but I would be tempted to refactor the foreach loop into a LINQ statement that could then be considered a guard clause. Again, in a contrived example the intent of the code isn't apparent and someFunction() may have some other side effect or the result may be used in the // Rest of code....
if (string.IsNullOrEmpty(s) || i == null) return null;
if (someFunction(s, i).Any(r => !r.Passed)) return null;
Giving the following refactored function:
string fooBar(string s, int? i) {
    if (string.IsNullOrEmpty(s) || i == null) return null;
    if (someFunction(s, i).Any(r => !r.Passed)) return null;
    // Rest of code...
    return ret;
}
One good reason I can think of is for code maintenance: you have a single point of exit. If you want to change the format of the result,..., it's just much simpler to implement. Also, for debugging, you can just stick a breakpoint there :)
Having said that, I once had to work in a library where the coding standards imposed 'one return statement per function', and I found it pretty tough. I write lots of numerical computations code, and there often are 'special cases', so the code ended up being quite hard to follow...
Multiple exit points are fine for small enough functions -- that is, a function that can be viewed in its entirety on one screen. If a lengthy function likewise includes multiple exit points, it's a sign that the function can be chopped up further.
That said, I avoid multiple-exit functions unless absolutely necessary. I have felt the pain of bugs that are due to some stray return in some obscure line in more complex functions.
I've worked with terrible coding standards that forced a single exit path on you and the result is nearly always unstructured spaghetti if the function is anything but trivial -- you end up with lots of breaks and continues that just get in the way.
Single exit point - all other things equal - makes code significantly more readable.
But there's a catch: the popular construction
resulttype res;
if if if...
return res;
is a fake; "res =" is not much better than "return". It has a single return statement, but multiple points where the function actually ends.
If you have a function with multiple returns (or "res ="s), it's often a good idea to break it into several smaller functions, each with a single exit point.
My usual policy is to have only one return statement at the end of a function unless the complexity of the code is greatly reduced by adding more. In fact, I'm rather a fan of Eiffel, which enforces the only-one-return rule by having no return statement (there's just an auto-created 'result' variable to put your result in).
There certainly are cases where code can be made clearer with multiple returns than the obvious version without them would be. One could argue that more rework is needed if you have a function that is too complex to be understandable without multiple return statements, but sometimes it's good to be pragmatic about such things.
If you end up with more than a few returns there may be something wrong with your code. Otherwise I would agree that sometimes it is nice to be able to return from multiple places in a subroutine, especially when it make the code cleaner.
Perl 6: Bad Example
sub Int_to_String( Int $i ){
    given( $i ){
        when 0 { return "zero" }
        when 1 { return "one" }
        when 2 { return "two" }
        when 3 { return "three" }
        when 4 { return "four" }
        ...
        default { return undef }
    }
}
would be better written like this
Perl 6: Good Example
@Int_to_String = qw{
    zero
    one
    two
    three
    four
    ...
};
sub Int_to_String( Int $i ){
    return undef if $i < 0;
    return undef unless $i < @Int_to_String.elems;
    return @Int_to_String[$i];
}
Note, this was just a quick example.
I vote for a single return at the end as a guideline, because it helps with common code clean-up handling ... For example, take a look at the following code ...
void ProcessMyFile (char *szFileName)
{
    FILE *fp = NULL;
    char *pbyBuffer = NULL;
    do {
        fp = fopen (szFileName, "r");
        if (NULL == fp) {
            break;
        }
        pbyBuffer = malloc (__SOME__SIZE___);
        if (NULL == pbyBuffer) {
            break;
        }
        /*** Do some processing with file ***/
    } while (0);
    if (pbyBuffer) {
        free (pbyBuffer);
    }
    if (fp) {
        fclose (fp);
    }
}
This is probably an unusual perspective, but I think that anyone who believes that multiple return statements are to be favoured has never had to use a debugger on a microprocessor that supports only 4 hardware breakpoints. ;-)
While the issues of "arrow code" are completely real, one convenience that goes away when using multiple return statements is debugger use: you have no catch-all position to put a breakpoint to guarantee that you're going to see the exit and hence the return condition.
The more return statements you have in a function, the higher complexity in that one method. If you find yourself wondering if you have too many return statements, you might want to ask yourself if you have too many lines of code in that function.
But, no, there is nothing wrong with one/many return statements. In some languages, it is a better practice (C++) than in others (C).