Consider these two options:
if(successful)
{
if(condition)
{
//do something
}
if(condition)
{
//do something
}
...
}
or
if (successful && condition)
{
//do something
}
if (successful && condition)
{
//do something
}
...
Imagine there are 100 if statements.
Is there any difference in efficiency?
Thanks in advance.
There are two correct answers for this. Everything else is nonsense.
Stop worrying about micro-optimizations like this unless you have proven the need for them. This can only be done by measuring and confirming that the code you're looking at is a bottleneck. (Hint: your intuitions in matters like this are almost, but not quite, always wrong.)
If you have successfully proven that your code is a bottleneck, try both ways and measure the results. Nobody here is going to be able to answer this question for you unless they happen to have identical hardware running on an identical operating system and are compiling with an identical compiler.
Make your code correct first. Then measure it for performance. Then optimize if needed. Everything else is nonsense.
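If you do reach the measuring stage, a rough sketch of "try both ways and time them" might look like the following in Java. Everything here (the class name, the toy conditions) is invented for illustration, and a hand-rolled System.nanoTime() loop like this is easily fooled by the JIT and the OS; a harness such as JMH is the usual recommendation for real measurements.
// Rough sketch only: naive timing like this is easily distorted by JIT
// compilation, dead-code elimination and OS noise.
public class BranchTiming {
    static int counter = 0;

    static void nested(boolean successful, boolean condition) {
        if (successful) {
            if (condition) {
                counter++;
            }
        }
    }

    static void combined(boolean successful, boolean condition) {
        if (successful && condition) {
            counter++;
        }
    }

    public static void main(String[] args) {
        final int runs = 100_000_000;
        long t0 = System.nanoTime();
        for (int i = 0; i < runs; i++) nested(i % 2 == 0, i % 3 == 0);
        long t1 = System.nanoTime();
        for (int i = 0; i < runs; i++) combined(i % 2 == 0, i % 3 == 0);
        long t2 = System.nanoTime();
        System.out.println(counter); // keep the work observable
        System.out.printf("nested: %d ms, combined: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}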
That all depends on how costly it is to evaluate the successful expression.
You should also note that the two versions are not semantically equivalent, as the evaluation of the if-expression might have side effects¹.
If you are actually facing performance issues then measure, don't guess. Measuring will be the only way to see what the performance really is.
¹ To explain a question from the comments, here is a simple example where you would get different behavior:
The method CreateProcess has the side-effect of starting a new process and indicates the successful creation by returning true:
bool CreateProcess(string filename, out Handle handle) { ... }
if (CreateProcess("program.exe", out handle))
{
if (someCondition)
{
handle.SomeMethod(...);
}
if (someOtherCondition)
{
handle.SomeOtherMethod(...);
}
}
This is quite different from the following:
if (CreateProcess("program.exe", out handle) && someCondition)
{
handle.SomeMethod(...);
}
if (CreateProcess("program.exe", out handle) && someOtherCondition)
{
handle.SomeOtherMethod(...);
}
Both are O(1). Anything else depends on the language/compiler/optimizer you use.
Let me start out by stating that I completely agree with JMcO.
However, I find it can be interesting to think about the differences. (That might be because I'm working on a compiler, where decisions about optimizing the output have to be made up front, based not on measurements but on assumptions about, and knowledge of, how the compiler is generally used.)
There's no one answer to your question; there are simply too many aspects that might affect the performance:
Is successful a value or a method call?
Does the compiler generate short-circuiting for && operations (bailing out if the left-hand side is false), or does it always evaluate both the left and right hand sides of the &&? (A small demonstration of short-circuiting follows at the end of this answer.)
What kind of branch prediction is the processor using?
How does the compiler treat compound conditions vs. nested ones? It might produce the same binary for your two examples (when it can verify that the outer condition has no side effects), though it most likely won't.
What kinds of optimizations is the compiler performing? Could some of them radically change the binary (e.g. skip the condition if it can be proven constant, or at least known beforehand)?
The list could go on, but my point is that even though it can be interesting to think about what could affect performance, without knowing a lot about both the build and execution environments it's basically impossible to predict.
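On the short-circuiting point in the list above, languages such as Java and C# do guarantee that && evaluates its right-hand side only when the left-hand side is true. A tiny illustration (names invented):
public class ShortCircuitDemo {
    static boolean expensiveCheck() {
        System.out.println("expensiveCheck() was evaluated");
        return true;
    }

    public static void main(String[] args) {
        boolean successful = false;
        // Because successful is false, expensiveCheck() is never called:
        // && short-circuits and skips the right-hand side entirely.
        if (successful && expensiveCheck()) {
            System.out.println("both were true");
        }
        System.out.println("done");
    }
}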
I have sometimes been told that you cannot put a return in the middle of a conditional or a loop, because it breaks the flow. However, now I have been told that you can, and that it is even better. I'm confused. This would usually come up inside a function.
Can you put a return there or not? Why? Or does it make no difference?
Example:
if (i == 0)
{
//other code
return true;
}
else
{
//other code
return false;
}
or
if (i == 0)
{
//other code
b= true;
}
else
{
//other code
b= false;
}
return b;
Your two examples are basically equivalent in functionality, and either will work. In fact, an optimizing compiler may easily turn your second example into your first.
Most programmers would likely prefer the first as the intent is clearer.
It's better to have a single return at the bottom. That way, you have only one point of entry and one point of exit. It is much easier to debug code when you don't have to worry about where it will exit. This is not a big deal with very short methods, but for long ones that go on for a few hundred lines, it is much cleaner.
I don't see any practical implication of returning in the middle of a loop. If you hear people saying you shouldn't, it must be on the basis of readability: multiple exit points from a function can make some code ugly. Also, most of the time you have to do some cleanup before exiting the routine, so programmers generally tend to keep the cleanup in one place and always exit through that path. If you have multiple exit points, you have to add the cleanup in all of those places, which duplicates code and again hurts readability. I have seen code with returns spread all over the place that eventually failed to do the cleanup properly and caused memory leaks.
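As a side note, in languages with try/finally (Java, C# and similar) the cleanup concern can be handled without funnelling every path through a single return. A minimal sketch, with the Resource type and its methods invented for illustration:
// Sketch only: Resource and its methods are invented stand-ins.
class Resource {
    boolean isValid() { return true; }
    void doWork()     { System.out.println("working"); }
    void close()      { System.out.println("cleaned up"); }
}

class CleanupDemo {
    static boolean process(Resource resource) {
        try {
            if (!resource.isValid()) {
                return false;        // early exit is safe here...
            }
            resource.doWork();
            return true;
        } finally {
            resource.close();        // ...because cleanup runs on every return path
        }
    }

    public static void main(String[] args) {
        System.out.println(process(new Resource()));
    }
}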
The bigger problem is that most of the time the code you write now lives for a long time, the maintainers keep changing, and at some point people no longer understand the whole intent of every line of code. That adds to the confusion.
All that said, I have seen a lot of very beautifully written code with returns in the middle of loops.
This is a choice of style rather than a rule or a matter of performance. The second code example follows the "single entry, single exit" approach, where the code within the function only enters from the top and only exits from the bottom. The idea behind this is that it is more "safe" and makes the code flow easier to follow. The safety comes into play when you have manually managed dynamic storage: with a single point of return, you can ensure that you free all the memory. Of course, languages like Java and C# manage dynamic storage for you, so this isn't really an issue. Also, if you're exiting multiple times in the middle of a function (particularly if it's very long), it might be hard to keep track of what causes the function to return.
However, choosing to exit only at the bottom of a function can create its own problems, as you may sometimes need to keep track of more state by setting and checking flags.
As for your original question, it certainly does not break anything in modern programming languages; it's all up to you. Go with the way you find easier to follow.
After reading To ternary or not to ternary? and Is this a reasonable use of the ternary operator?, I gathered that simple uses of the ternary operator are generally accepted because they do not hurt readability. I also gathered that having one side of the ternary return null when you don't want it to do anything is a complete waste. However, while refactoring my site I ran across this case, which made me wrinkle my nose:
if ($success) {
$database->commit();
} else {
$database->rollback();
}
I refactored this down to
$success ? $database->commit() : $database->rollback();
And I was pretty satisfied with it, but something inside me made me come here for input. Exception catching aside, would you consider this an okay use case? Am I wondering whether this is okay only because I have never done it before, or because it really is bad practice? It doesn't seem difficult to me, but would it seem difficult to understand for anyone else? Does it depend on the language, i.e. would this be more or less wrong in C, C++, or Java?
No, it is not OK. You are turning something that should look like a statement into something that looks like an expression. In fact, if commit() and rollback() return void, this will not compile in Java at least (not sure about the others mentioned).
If you want a one-liner, you should rather create another method on the $database object such as $database->endTransaction($success) that does the if statement internally.
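Here is roughly what such a helper could look like, sketched in Java for consistency with the other examples; Database, commit() and rollback() are stand-ins for whatever the real API provides:
// Sketch only: Database, commit() and rollback() stand in for the real API.
class Database {
    void commit()   { System.out.println("commit"); }
    void rollback() { System.out.println("rollback"); }

    // The if statement lives in one named place instead of at every call site.
    void endTransaction(boolean success) {
        if (success) {
            commit();
        } else {
            rollback();
        }
    }
}
The call site then reads database.endTransaction(success), which states the intent directly.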
I would be more inclined to use it in case the two actions are mutually-exclusive and/or opposite (yet related to each other), for example:
$success ? go_up() : go_down();
For two unrelated actions I would be less inclined to use it, the reason being that there is a higher probability for one of the branches to need expanding in the future. If that's the case, you will again need to rewrite it as an if-else statement. Imagine that you have:
$success ? do_abc() : do_xyz();
If at some point you decide that the first branch needs to do_def() as well, you'll need to rewrite the whole thing to an if-else statement again.
The more frequent usage of the ternary operator, however, is:
$var = $success ? UP : DOWN;
This way you are evaluating it as an expression, not as a statement.
The real question is, "Is the ternary form more or less readable than the if form?". I'd say it isn't. But this is a question of style, not of function.
So I'm currently trying to grasp the concept of recursion, and I understand most of the problems that I've encountered, but I feel as though it wouldn't be applicable to too many computing problems. This is just a novice's assumption though, so I'm asking: are there many practical uses for recursion as a programmer? And also, what typical problems can be solved with it? The only ones that I've seen are heapsort and brain-teaser-type problems like "The Towers of Hanoi", which seem very specific and lacking broad use.
Thanks
There are a plethora of uses for recursion in programming - a classic example being navigating a tree structure, where you'd call the navigation function with each child element discovered, etc.
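For instance, a depth-first walk over a simple tree might look like this; the Node type here is invented purely for illustration:
import java.util.ArrayList;
import java.util.List;

class Node {
    final String name;
    final List<Node> children = new ArrayList<>();

    Node(String name) { this.name = name; }
}

class TreeWalk {
    // Visit a node, then recurse into each child: the call stack mirrors
    // the nesting of the tree itself.
    static void visit(Node node, int depth) {
        System.out.println("  ".repeat(depth) + node.name);
        for (Node child : node.children) {
            visit(child, depth + 1);
        }
    }

    public static void main(String[] args) {
        Node root = new Node("root");
        Node a = new Node("a");
        Node b = new Node("b");
        root.children.add(a);
        root.children.add(b);
        a.children.add(new Node("a1"));
        visit(root, 0);
    }
}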
Here are some fields which would be almost impossible without recursion:
XML, HTML or any other tree like document structure
Compilation and parsing
Natural Language Processing
Divide and conquer algorithms
Many mathematical concepts, e.g. factorials
Recursion can lead to brilliantly elegant solutions to otherwise complex problems. If you're at all interested in programming as an art, you really should delve deeper.
Oh and if you're not sure, here's a solid definition of recursion:
Recursion (noun): See "Recursion"
It depends on what you're going to be doing I suppose. I probably write less than one recursive function a year as a C#/ASP.NET developer doing corporate web work. When I'm screwing around with my hobby code (mostly stat research) I find a lot more opportunities to apply recursion. Part of this is subject matter, part of it is that I'm much more reliant on 3rd party libraries that the client has already decided on when doing corporate work (where the algorithms needing recursion are implemented).
It's not something you use every day. But many algorithms about searching and sorting data can make use of it. In general, most recursive algorithms can also be written using iteration; oftentimes the recursive version is simpler.
If you check the questions which are listed as "Related" to this question, you will find a "plethora" of stuff about recursion that will help you to understand it better.
Recursion isn't something new, and it is not just a toy concept. Recursive algorithms have been around since before there were computers.
The classic definition of "factorial" is a prime example:
fact(x) =
if x < 0 then fact(x) is undefined
if x = 0 then fact(0) = 1
if x > 0 then fact(x) = x * fact(x-1)
This isn't something that was created by computer geeks who thought that recursion was a cool toy. This is the standard mathematical definition.
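Translated almost word for word into code, it might look like this (a minimal sketch; negative input is rejected rather than left undefined):
class Factorial {
    // Direct translation of the mathematical definition above.
    static long fact(long x) {
        if (x < 0) throw new IllegalArgumentException("fact is undefined for x < 0");
        if (x == 0) return 1;
        return x * fact(x - 1);   // x > 0: recurse on the smaller argument
    }

    public static void main(String[] args) {
        System.out.println(fact(5));   // 120
    }
}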
Call recursion, as a program construct, is something that should almost never be used except in extremely high-level languages where you expect the compiler to optimize it to a different construct. Use of call recursion, except when you can establish small bounds on the depth, leads to stack overflow, and not the good kind of Stack Overflow that answers your questions for you. :-)
Recursion as an algorithmic concept, on the other hand, is very useful. It's key to working with any recursively-defined data formats (like HTML or XML, or a hierarchical filesystem) as well as for implementing important algorithms in searching, sorting, and (everyone's favorite) graphics rendering, among countless other fields.
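To make that distinction concrete, here is a sketch of the same kind of traversal (a hierarchical filesystem this time) written without call recursion: the recursion lives on an explicit stack, so arbitrarily deep hierarchies cannot overflow the call stack. This is illustration only, not a robust directory walker.
import java.io.File;
import java.util.ArrayDeque;
import java.util.Deque;

class IterativeWalk {
    // The algorithm is still recursive in spirit (a directory contains
    // directories), but the pending work sits on an explicit stack instead
    // of the call stack.
    static void walk(File root) {
        Deque<File> stack = new ArrayDeque<>();
        stack.push(root);
        while (!stack.isEmpty()) {
            File current = stack.pop();
            System.out.println(current.getPath());
            File[] children = current.listFiles();   // null for plain files
            if (children != null) {
                for (File child : children) {
                    stack.push(child);
                }
            }
        }
    }

    public static void main(String[] args) {
        walk(new File("."));
    }
}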
There are several languages that don't support loops (i.e. for and while), and as a result, when you need repeating behavior, you have to use recursion (I believe J does not have loops). In many cases, recursion requires much less code. For example, I wrote an isPrime method that took only two lines of code.
public static boolean isPrime(int n)
{
    return n != 1 && isPrime(n, 2);
}

public static boolean isPrime(int n, int c)
{
    return c == n || n % c != 0 && isPrime(n, c + 1);
}
The iterative solution would take much more code:
public static boolean isPrime(int n)
{
    if (n == 1) return false;
    int c = 2;
    while (c != n)
    {
        if (n % c == 0) return false;
        c++;
    }
    return true;
}
Another good example is when you are working with ListNodes, for example if you would like to check if all the elements in a ListNode are the same, a recursive solution would be much easier.
public static <E> boolean allSame(ListNode<E> list)
{
    return list.getNext() == null
        || list.getValue().equals(list.getNext().getValue()) && allSame(list.getNext());
}
The iterative solution would look something like this:
public static <E> boolean allSame(ListNode<E> list)
{
    while (list.getNext() != null)
    {
        if (!list.getValue().equals(list.getNext().getValue())) return false;
        list = list.getNext();
    }
    return true;
}
As you can see, in most cases recursive solutions are shorter than iterative solutions.
Most of you have probably bumped into a situation where multiple things must check out, in a certain order, before the application can proceed, for example in the very simple case of creating a listening socket (socket, bind, listen, accept, etc.). There are at least two obvious ways (don't take this 100% verbatim):
if (1st_ok)
{
if (2nd_ok)
{
...
or
if (!1st_ok)
{
return;
}
if (!2nd_ok)
{
return;
}
...
Have you ever thought of anything smarter? Do you prefer one of the above over the other, or do you (if the language provides for it) use exceptions?
I prefer the second technique. The main problem with the first one is that it increases the nesting depth of the code, which is a significant issue when you've got a substantial number of preconditions/resource-allocs to check since the business part of the function ends up deeply buried behind a wall of conditions (and frequently loops too). In the second case, you can simplify the conceptual logic to "we've got here and everything's OK", which is much easier to work with. Keeping the normal case as straight-line as possible is just easier to grok, especially when doing maintenance coding.
It depends on the language - e.g. in C++ you might well use exceptions, while in C you might use one of several strategies:
if/else blocks
goto (one of the few cases where a single goto label for "exception" handling might be justified)
use break within a do { ... } while (0) loop
Personally I don't like multiple return statements in a function - I prefer to have a common clean up block at the end of the function followed by a single return statement.
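A sketch of how those two preferences combine, written in Java for consistency with the other examples even though the do { ... } while (0) idiom comes from C; step1(), step2() and cleanup() are invented stand-ins for the real work:
class SingleExitDemo {
    static boolean step1() { return true; }    // stand-in checks
    static boolean step2() { return false; }
    static void cleanup()  { System.out.println("cleanup"); }

    // The do { ... } while (false) block lets 'break' act as a structured
    // jump to the common cleanup: one entry, one cleanup block, one return.
    static boolean setup() {
        boolean ok = false;
        do {
            if (!step1()) break;
            if (!step2()) break;
            ok = true;              // everything succeeded
        } while (false);

        cleanup();                  // runs exactly once, on every path
        return ok;
    }

    public static void main(String[] args) {
        System.out.println(setup());
    }
}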
This tends to be a matter of style. Some people only like returning at the end of a procedure, others prefer to do it wherever needed.
I'm a fan of the second method, as it allows for clean and concise code as well as ease of adding documentation on what it's doing.
// Checking for llama integration
if (!1st_ok)
{
return;
}
// Llama found, loading spitting capacity
if (!2nd_ok)
{
return;
}
// Etc.
I prefer the second version.
In the normal case, all code between the checks executes sequentially, so I like to see them at the same level. Normally none of the if branches are executed, so I want them to be as unobtrusive as possible.
I use the second because I think it reads better and the logic is easier to follow. Also, they say exceptions should not be used for flow control, but for exceptional and unexpected cases. I'd like to see what the pros say about this.
What about
if (1st_ok && 2nd_ok) { }
or if some work must be done, like in your example with sockets
if (1st_ok() && 2nd_ok()) { }
I avoid the first solution because of nesting.
I avoid the second solution because of corporate coding rules which forbid multiple returns in a function body.
Of course coding rules also forbid goto.
My workaround is to use a local variable:
bool isFailed = false; // or whatever is available for bool/true/false
if (!check1) {
log_error();
try_recovery_action();
isFailed = true;
}
if (!isFailed) {
if (!check2) {
log_error();
try_recovery_action();
isFailed = true;
}
}
...
This is not as beautiful as I would like but it is the best I've found to conform to my constraints and to write a readable code.
For what it is worth, here are some of my thoughts and experiences on this question.
Personally, I tend to prefer the second case you outlined. I find it easier to follow (and debug) the code. That is, as the code progresses, it becomes "more correct". In my own experience, this has seemed to be the preferred method.
I don't know how common it is in the field, but I've also seen condition testing written as ...
error = foo1 ();
if ((error == OK) && (test1)) {
error = foo2 ();
}
if ((error == OK) && (test2)) {
error = foo3 ();
}
...
return (error);
Although readable (always a plus in my books) and avoiding deep nesting, it always struck me as using a lot of unnecessary testing to achieve those ends.
The first method I see used less frequently than the second. Of those times, the vast majority were because there was no nice way around it. For the remaining few instances, it was justified on the basis of extracting a little more performance in the success case. The argument was that the processor would predict a forward branch as not taken (corresponding to the else clause). This depended upon several factors including the architecture, compiler, language, need, ... Obviously most projects (and most aspects of the project) did not meet those requirements.
Hope this helps.
Should developers avoid using continue in C# or its equivalent in other languages to force the next iteration of a loop? Would arguments for or against overlap with arguments about Goto?
I think there should be more use of continue!
Too often I come across code like:
for (...)
{
if (!cond1)
{
if (!cond2)
{
... highly indented lines ...
}
}
}
instead of
for (...)
{
if (cond1 || cond2)
{
continue;
}
...
}
Use it to make the code more readable!
Is continue any more harmful than, say, break?
If anything, in the majority of cases where I encounter/use it, I find it makes code clearer and less spaghetti-like.
You can write good code with or without continue and you can write bad code with or without continue.
There probably is some overlap with arguments about goto, but as far as I'm concerned the use of continue is equivalent to using break statements (in loops) or return statement from anywhere in a method body - if used correctly it can simplify the code (less likely to contain bugs, easier to maintain).
There are no harmful keywords, only harmful uses of them.
Goto is not harmful per se, neither is continue. They need to be used carefully, that's all.
If continue is causing a problem with readability, then chances are you have other problems. For example, massive amounts of code inside a for loop. If you have to write large for loops, I would try to stick to using continue close to the top of the for loop. Otherwise, a continue buried deep in the middle of a for loop can easily be missed.
I like to use continue at the beginning of loops for handling simple if conditions.
To me it makes the code more readable since there is not extra nesting and you can see that I have explicitly dealt with these cases.
Is this the same reason that I would use a goto? Perhaps. I do use them for readability at times and to stop the nesting of code but I usually use them more for cleanup/error handling.
I'd say: "it depends".
If you have reasonably small loop code (where you can see the whole loop-code without scrolling) its usually ok to use a continue.
However, if the loop's body is large (for example due to a big switch) and there is some follow-up code (say, below the switch), you may easily introduce bugs by adding a continue and thereby sometimes skipping over that code. I have encountered this in the heart of a bytecode interpreter, where some instrumentation code was occasionally not executed because of a continue in certain case branches.
This might be a somewhat artificially constructed case, but I generally try to avoid continue and use an if instead (though without nesting as deeply as in Rob's sample code).
I don't think continue could ever be as difficult as goto since continue never moves execution out of the code block that it is in.
If you are iterating through any kind of result set and performing operations on the results, e.g. within a foreach, and one particular result causes a problem, it's rather useful to capture the expected error (via try-catch), log it, and move on to the next result via continue. Continue is especially useful, IMO, for unattended services that do jobs at odd hours, where one exception shouldn't affect the other x number of records.
As far as this programmer is concerned, Nested if/else considered harmful.
Using continue at the beginning of a loop to avoid iterating over unnecessary elements is not harmful and can be very useful, but using it in the middle of nested ifs and elses can turn the loop code into a complex maze that is hard to understand and validate.
I think the tendency to avoid it is also the result of a semantic misunderstanding. People who never see or write the 'continue' keyword in their own code can, when they see code with continue, interpret it as "the continuation of the natural flow". If instead of continue we had next, for instance, I think more people would appreciate this valuable cursor feature.
goto can be used as a continue, but not the reverse.
You can "goto" anywhere, thus break flow control arbitrarily.
Thus continue, not nearly as harmful.
Others have hinted at it... but continue and break are constrained by the compiler and come with their own associated rules. Goto has no such limitations, though the net effect might be almost the same in some circumstances.
I do not consider continue or break to be harmful per se, though I'm sure either can be used poorly in a way that would make any sane programmer gag.
Continue is a really useful function in most languages, because it allows blocks of code to be skipped for certain conditions.
One alternative would be to use boolean variables in if statements, but these would need to be reset after every use.
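For comparison, here is a small sketch of the boolean-flag alternative next to the continue version; shouldSkip() and process() are invented stand-ins:
class ContinueVsFlag {
    static boolean shouldSkip(int value) { return value % 2 == 0; }  // stand-in condition
    static void process(int value)       { System.out.println(value); }

    public static void main(String[] args) {
        int[] values = {1, 2, 3, 4, 5};

        // With continue: the uninteresting case is dismissed up front.
        for (int value : values) {
            if (shouldSkip(value)) {
                continue;
            }
            process(value);
        }

        // With a boolean flag: the same logic, but the flag has to be
        // computed, tested, and kept in sync as the body grows.
        for (int value : values) {
            boolean skip = shouldSkip(value);
            if (!skip) {
                process(value);
            }
        }
    }
}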
I'd say yes. To me, it just breaks the 'flow' of a fluidly-written piece of code.
Another argument could also be that if you stick to the basic keywords supported by most modern languages, then your program flow (if not the logic or code) could be ported to any other language. Having an unsupported keyword (i.e., continue or goto) would break that.
It's really more of a personal preference, but I've never had to use it and don't really consider it an option when I'm writing new code. (same as goto.)
I believe the bottom-line argument against continue is that it makes it harder to PROVE that the code is correct, in the mathematical sense of proof. But that probably doesn't matter to you, because no one has the resources to 'prove' a significantly complex computer program anyway.
Enter the static-analysis tools. You may make things harder on them...
And the goto, that sounds like a nightmare for the same reasons but at any random place in code.
continue feels wrong to me. break gets you out of there, but continue seems just to be spaghetti.
On the other hand, you can emulate continue with break (at least in Java).
for (String str : strs) contLp: {
...
break contLp;
...
}
(This posting had an obvious bug in the above code for over a decade. That doesn't look good for break/continue.)
continue can be useful in some circumstances, but it still feels dirty to me. It might be time to introduce a new method.
for (char c : cs) {
final int i;
if ('0' <= c && c <= '9') {
i = c - '0';
} else if ('a' <= c && c <= 'z') {
i = c - 'a' + 10;
} else {
continue;
}
... use i ...
}
These uses should be very rare.