Successive success checks - language-agnostic

Most of you have probably bumped into a situation where multiple things must check out, in a certain order, before the application can proceed, for example in the very simple case of creating a listening socket (socket, bind, listen, accept, etc.). There are at least two obvious ways (don't take this 100% verbatim):
if (1st_ok)
{
    if (2nd_ok)
    {
        ...
or
if (!1st_ok)
{
    return;
}
if (!2nd_ok)
{
    return;
}
...
Have you ever thought of anything smarter? Do you prefer one over the other of the above, or do you (if the language provides for it) use exceptions?

I prefer the second technique. The main problem with the first one is that it increases the nesting depth of the code, which is a significant issue when you've got a substantial number of preconditions/resource-allocs to check since the business part of the function ends up deeply buried behind a wall of conditions (and frequently loops too). In the second case, you can simplify the conceptual logic to "we've got here and everything's OK", which is much easier to work with. Keeping the normal case as straight-line as possible is just easier to grok, especially when doing maintenance coding.

It depends on the language. In C++ you might well use exceptions, while in C you might use one of several strategies:
if/else blocks
goto (one of the few cases where a single goto label for "exception" handling might be justified)
break within a do { ... } while (0) loop
Personally I don't like multiple return statements in a function - I prefer to have a common clean up block at the end of the function followed by a single return statement.
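For illustration, here is a minimal C sketch of the last two strategies, each with a common cleanup block and a single return statement at the end; the file/buffer work is just a stand-in for whatever resources you actually acquire:

#include <stdio.h>
#include <stdlib.h>

/* goto variant: every failed check jumps forward to one cleanup label. */
int process_file(const char *path)
{
    int ret = -1;                   /* pessimistic default */
    char *buf = NULL;
    FILE *f = fopen(path, "rb");
    if (!f)
        goto out;

    buf = malloc(4096);
    if (!buf)
        goto out;

    if (fread(buf, 1, 4096, f) == 0)
        goto out;

    ret = 0;                        /* every check passed */
out:
    free(buf);                      /* free(NULL) is a no-op */
    if (f)
        fclose(f);
    return ret;
}

/* do { ... } while (0) variant: break plays the role of goto. */
int process_file2(const char *path)
{
    int ret = -1;
    char *buf = NULL;
    FILE *f = NULL;

    do {
        f = fopen(path, "rb");
        if (!f)
            break;
        buf = malloc(4096);
        if (!buf)
            break;
        if (fread(buf, 1, 4096, f) == 0)
            break;
        ret = 0;
    } while (0);

    free(buf);
    if (f)
        fclose(f);
    return ret;
}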

This tends to be a matter of style. Some people only like returning at the end of a procedure, others prefer to do it wherever needed.
I'm a fan of the second method, as it allows for clean and concise code as well as ease of adding documentation on what it's doing.
// Checking for llama integration
if (!1st_ok)
{
    return;
}
// Llama found, loading spitting capacity
if (!2nd_ok)
{
    return;
}
// Etc.

I prefer the second version.
In the normal case, all code between the checks executes sequentially, so I like to see them at the same level. Normally none of the if branches are executed, so I want them to be as unobtrusive as possible.

I use the 2nd because I think it reads better and it's easier to follow the logic. Also, they say exceptions should not be used for flow control, but for exceptional and unexpected cases. I'd like to see what the pros say about this.

What about
if (1st_ok && 2nd_ok) { }
or if some work must be done, like in your example with sockets
if (1st_ok() && 2nd_ok()) { }
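To make that concrete, here is a minimal C sketch of the short-circuit chain for the socket case; the helper names are invented for the example, and the underlying calls are the usual POSIX ones:

#include <stdio.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Each helper returns nonzero on success, so the checks chain with &&
   and evaluation stops at the first failure. */
static int make_socket(int *fd)
{
    *fd = socket(AF_INET, SOCK_STREAM, 0);
    return *fd >= 0;
}

static int bind_any(int fd, unsigned short port)
{
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    return bind(fd, (struct sockaddr *)&addr, sizeof addr) == 0;
}

int main(void)
{
    int fd = -1;

    if (make_socket(&fd) && bind_any(fd, 8080) && listen(fd, 16) == 0) {
        puts("listening on port 8080");
        /* accept(...) would go here */
    } else {
        perror("setup failed");
    }

    if (fd >= 0)
        close(fd);
    return 0;
}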

I avoid the first solution because of nesting.
I avoid the second solution because of corporate coding rules which forbid multiple return in a function body.
Of course coding rules also forbid goto.
My workaround is to use a local variable:
bool isFailed = false; // or whatever is available for bool/true/false
if (!check1) {
    log_error();
    try_recovery_action();
    isFailed = true;
}
if (!isFailed) {
    if (!check2) {
        log_error();
        try_recovery_action();
        isFailed = true;
    }
}
...
This is not as beautiful as I would like, but it is the best I've found to conform to my constraints and write readable code.
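For what it's worth, the flag can also flatten the nesting entirely if you fold it into each subsequent condition instead of wrapping a block around it. A sketch under the same constraints (single return, no goto); the check/log/recovery helpers are trivial stand-ins:

#include <stdbool.h>

static bool check1(void) { return true; }    /* stand-in checks */
static bool check2(void) { return true; }
static void log_error(void) { }
static void try_recovery_action(void) { }

void do_work(void)
{
    bool isFailed = false;

    if (!check1()) {
        log_error();
        try_recovery_action();
        isFailed = true;
    }
    /* Guarding each later check with the flag keeps every step at the
       same indentation level, however many checks follow. */
    if (!isFailed && !check2()) {
        log_error();
        try_recovery_action();
        isFailed = true;
    }
    /* ... */
}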

For what it is worth, here are some of my thoughts and experiences on this question.
Personally, I tend to prefer the second case you outlined. I find it easier to follow (and debug) the code. That is, as the code progresses, it becomes "more correct". In my own experience, this has seemed to be the preferred method.
I don't know how common it is in the field, but I've also seen condition testing written as ...
error = foo1();
if ((error == OK) && (test1)) {
    error = foo2();
}
if ((error == OK) && (test2)) {
    error = foo3();
}
...
return (error);
Although readable (always a plus in my books) and avoiding deep nesting, it always struck me as using a lot of unnecessary testing to achieve those ends.
The first method I see used less frequently than the second. Of those times, the vast majority were because there was no nice way around it. For the remaining few instances, it was justified on the basis of extracting a little more performance in the success case. The argument was that the processor would predict a forward branch as not taken (corresponding to the else clause). This depended upon several factors, including the architecture, compiler, language, need, and so on. Obviously most projects (and most aspects of a project) did not meet those requirements.
Hope this helps.


Nesting Asynchronous Promises in ActionScript

I have a situation where I need to perform dependent asynchronous operations. For example, check the database for data, if there is data, perform a database write (insert/update), if not continue without doing anything. I have written myself a promise based database API using promise-as3. Any database operation returns a promise that is resolved with the data of a read query, or with the Result object(s) of a write query. I do the following to nest promises and create one point of resolution or rejection for the entire 'initialize' operation.
public function initializeTable():Promise
{
    var dfd:Deferred = new Deferred();
    select("SELECT * FROM table").then(tableDefaults).then(resolveDeferred(dfd)).otherwise(errorHandler(dfd));
    return dfd.promise;
}

public function tableDefaults(data:Array):Promise
{
    if (!data || !data.length)
    {
        // defaultParams is an Object of table default fields/values.
        return insert("table", defaultParams);
    }
    else
    {
        var resolved:Deferred = new Deferred();
        resolved.resolve(null);
        return resolved.promise;
    }
}

public function resolveDeferred(deferred:Deferred):Function
{
    return function resolver(value:* = null):void
    {
        deferred.resolve(value);
    }
}

public function rejectDeferred(deferred:Deferred):Function
{
    return function rejector(reason:* = null):void
    {
        deferred.reject(reason);
    }
}
My main questions:
Are there any performance issues that will arise from this? Memory leaks etc.? I've read that function variables perform poorly, but I don't see another way to nest operations so logically.
Would it be better to have, say, a global resolved instance that is created and resolved only once, but returned whenever we need an 'empty' promise?
EDIT:
I'm removing question 3 (Is there a better way to do this??), as it seems to be leading to opinions on the nature of promises in asynchronous programming. I meant better in the scope of promises, not asynchronicity in general. Assume you have to use this promise based API for the sake of the question.
I usually don't write these kinds of opinion-based answers, but here it's pretty important. Promises in AS3 = THE ROOT OF ALL EVIL :) And I'll explain why..
First, as BotMaster said, it's weakly typed. What this means is that you aren't using AS3 properly, and the only reason this is even possible is backwards compatibility. The truth here is that Adobe has spent an enormous amount of effort turning AS3 into a strongly typed OOP language. Don't stray away from that.
The second point is that Promises were created in the first place so that poor developers could actually get some work done in JavaScript. This is not a new design pattern or anything; actually, it has no real benefits if you know how to structure your code properly. The thing Promises help with the most is avoiding the so-called Wall of Hell. But there are other ways to fix this in a natural manner (the very, very basic one is not to write functions within functions, but on the same level, and simply check the passed result).
The most important thing here is the nature of Promises. Very few people know what they actually do behind the scenes. Because of the nature of JavaScript (and ECMAScript in general), there is no real way to tell whether a function completed properly or not. If you return false / null / undefined, those are all regular return values. The only way a function can actually say "this operation failed" is by throwing an error. So every promisified method can potentially throw an error, and each error must be handled, or otherwise your code can stop working properly. What this means is that every single action inside a Promise is wrapped in a try-catch block! Every time you do absolutely basic stuff, you wrap it in try-catch. Even this block of yours:
else
{
    var resolved:Deferred = new Deferred();
    resolved.resolve(null);
    return resolved.promise;
}
In a "regular" way, you would simply use else { return null }. But now, you create tons of objects, resolvers, rejectors, and finally - you try-catch this block.
I cannot stress more on this, but I think you are getting the point. Try-catch is extremely slow! I understand that this is not a big problem in such a simple case like the one I just mentioned, but imagine you are doing it more and on more heavy methods. You are just doing extremely slow operations, for what? Because you can write lame code and just enjoy it..
The last thing to say - there are plenty of ways to use asynchronous operations and make them work one after another. Just by googling as3 function queue I found a few. Not to say that the event-based system is so flexible, and there are even alternatives to it (using callbacks). You've got it all in your hands, and you turn to something that is created because lacking proper ways to do it otherwise.
So my sincere advise as a person worked with Flash for a decade, doing casino games in big teams, would be - don't ever try using promises in AS3. Good luck!
var dfd:Deferred = new Deferred();
select("SELECT * FROM table").then(tableDefaults).then(resolveDeferred(dfd)).otherwise(errorHandler(dfd));
return dfd.promise;
This is the Forgotten Promise antipattern. It can instead be written as:
return select("SELECT * FROM table").then(tableDefaults);
This removes the need for the resolveDeferred and rejectDeferred functions.
var resolved:Deferred = new Deferred();
resolved.resolve(null);
return resolved.promise;
I would either extract this to another function, or use Promise.when(null). A global instance wouldn't work, because it would mean that the result handlers from one call could be called for a different one.

Conditional and Loop return in the middle of the code is that correct? [duplicate]

This question already has answers here:
Should a function have only one return statement?
(50 answers)
Closed 9 years ago.
Sometimes I have been told that you cannot put a return in the middle of a conditional or a loop, because it breaks the flow. However, now I have been told that you can, and that it is even better. I'm confused. This would usually happen inside a function.
Can you put a return there or not? Why? Or doesn't it make any difference?
Example:
if (i == 0)
{
    // other code
    return true;
}
else
{
    // other code
    return false;
}
or
if (i == 0)
{
    // other code
    b = true;
}
else
{
    // other code
    b = false;
}
return b;
Your two examples are basically equivalent in functionality, and either will work. In fact, an optimizing compiler may easily turn your second example into your first.
Most programmers would likely prefer the first as the intent is clearer.
It's better to have a single return at the bottom. That way, you have only one point of entry and one point of exit. It is much easier to debug code when you don't have to worry about where it will exit. This is not a big deal with very short methods, but for long ones that go on for a few hundred lines, it is much cleaner.
I don't see any practical implication of returning in the middle of a loop. If you hear people saying you shouldn't, it must be on the basis of readability. Multiple exit points from a function can make some code ugly. Also, most of the time you have to do some cleanup before exiting the routine, so programmers generally tend to keep the cleanup in one place and always exit through that path; if you have multiple exit points, you have to add the cleanup in all of those places, which duplicates code and again ruins readability. I have seen code with returns spread all over the place that eventually failed to do the cleanup properly, causing memory leaks.
The bigger problem is that most of the time the code you write now lives for a long time and the maintainers keep changing, and at some point people no longer understand the whole intent of all the lines of code present. That adds to the confusion.
All that said, I have seen a lot of very beautifully written code with returns in the middle of loops.
This is a choice of style rather than a rule or a matter of performance. The second code example follows the "single entry, single exit" approach, where the code within the function only enters from the top and only exits from the bottom. The idea behind this is that it is more "safe" and makes the code flow easier to follow. The safety comes into play when you manage dynamic storage manually: with a single point of return, you can ensure that you free all the memory. Of course, languages like Java and C# handle dynamic storage for you, so this isn't really an issue. Also, if you're exiting multiple times in the middle of a function (particularly if it's very long), it might be hard to keep track of what causes the function to return.
However, choosing to exit only at the bottom of a function can create its own problems, as you may sometimes need to keep track of more state by setting and checking flags.
As for your original question, it certainly does not break anything in modern programming languages; it's all up to you. Go with the way you find easier to follow.

All languages - Procedural Efficiency

Consider these two options:
if (successful)
{
    if (condition)
    {
        // do something
    }
    if (condition)
    {
        // do something
    }
    ...
}
or
if (successful && condition)
{
    // do something
}
if (successful && condition)
{
    // do something
}
...
Imagine there are 100 if statements.
Is there any difference in efficiency?
Thanks in advance.
There are two correct answers for this. Everything else is nonsense.
Stop worrying about micro-optimizations like this unless you have proven the need for them. This can only be done by measuring and confirming that the code you're looking at is a bottleneck. (Hint: your intuitions in matters like this are almost, but not quite, always wrong.)
If you have successfully proven that your code is a bottleneck, try both ways and measure the results. Nobody here is going to be able to answer this question for you unless they happen to have identical hardware running on an identical operating system and are compiling with an identical compiler.
Make your code correct first. Then measure it for performance. Then optimize if needed. Everything else is nonsense.
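If it does come to measuring, here is a minimal C sketch of the kind of harness I mean; the two variant bodies are trivial stand-ins for your real code, and the volatile globals just keep the optimizer from deleting the work:

#include <stdio.h>
#include <time.h>

static volatile int successful = 1, condition = 1, sink;

/* Stand-in for the nested version. */
static void variant_nested(void)
{
    if (successful) {
        if (condition)
            sink++;
    }
}

/* Stand-in for the combined-condition version. */
static void variant_flat(void)
{
    if (successful && condition)
        sink++;
}

static double time_it(void (*fn)(void), int iterations)
{
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < iterations; ++i)
        fn();
    clock_gettime(CLOCK_MONOTONIC, &end);
    return (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
}

int main(void)
{
    const int n = 10000000;
    printf("nested: %.3fs\n", time_it(variant_nested, n));
    printf("flat:   %.3fs\n", time_it(variant_flat, n));
    return 0;
}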
That all depends on how costly it is to evaluate the successful expression.
You should also note that the two versions are not semantically equivalent, as the evaluation of the if expression might have side effects¹.
If you are actually facing performance issues then measure, don't guess. Measuring will be the only way to see what the performance really is.
¹ To explain a question from the comments, here is a simple example where you would get different behavior:
The method CreateProcess has the side effect of starting a new process and indicates successful creation by returning true:
bool CreateProcess(string filename, out Handle handle) { ... }
if (CreateProcess("program.exe", out handle))
{
    if (someCondition)
    {
        handle.SomeMethod(...);
    }
    if (someOtherCondition)
    {
        handle.SomeOtherMethod(...);
    }
}
This is quite different from the following:
if (CreateProcess("program.exe", out handle) && someCondition)
{
    handle.SomeMethod(...);
}
if (CreateProcess("program.exe", out handle) && someOtherCondition)
{
    handle.SomeOtherMethod(...);
}
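If you do want the flattened shape without running the side effect twice, one option is to capture the result of the call once and test it together with the cheap conditions. A minimal C sketch of that pattern (all names are invented for the example):

#include <stdbool.h>
#include <stdio.h>

typedef struct { int id; } process_handle;

/* Stand-in with a visible side effect, like CreateProcess above. */
static bool create_process(const char *filename, process_handle *h)
{
    printf("starting %s\n", filename);
    h->id = 42;
    return true;
}

static void some_method(process_handle *h)       { printf("method on %d\n", h->id); }
static void some_other_method(process_handle *h) { printf("other on %d\n", h->id); }

int main(void)
{
    bool some_condition = true, some_other_condition = false;
    process_handle handle;

    /* The side-effecting call happens exactly once; only the cheap
       conditions are repeated, so the semantics match the nested form. */
    bool ok = create_process("program.exe", &handle);

    if (ok && some_condition)
        some_method(&handle);
    if (ok && some_other_condition)
        some_other_method(&handle);
    return 0;
}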
Both are O(1). Anything else depends on the language/compiler/optimizer you use.
Let me start out by stating that I completely agree with JMcO.
However, I find it can be interesting to think about the differences. (That might be because I'm working on a compiler, where decisions about optimization have to be made up front, based not on measurements but on assumptions about, and knowledge of, the general use of the compiler.)
There's no single answer to your question; there are simply too many aspects that might affect the performance:
Is successful a value or a method call?
Does the compiler short-circuit && operations (bailing out if the left-hand side is false), or does it always evaluate both sides of the &&?
What kind of branch prediction is the processor using?
How does the compiler treat compound conditions vs. nested ones? It might produce the same binaries for the two examples you're providing, though it most likely won't (it could only when it can verify that the outer condition has no side effects).
What kinds of optimizations is the compiler performing? Could some of them radically change the binaries (e.g. skipping the condition if it can be shown to be constant or at least known beforehand)?
The list could go on, but my point is this: even though it can be interesting to think about what could affect performance, without knowing a lot about both the build and execution environments it's basically impossible to predict it.

Programming style: should you return early if a guard condition is not satisfied?

One thing I've sometimes wondered is which is the better style out of the two shown below (if any)? Is it better to return immediately if a guard condition hasn't been satisfied, or should you only do the other stuff if the guard condition is satisfied?
For the sake of argument, please assume that the guard condition is a simple test that returns a boolean, such as checking to see if an element is in a collection, rather than something that might affect the control flow by throwing an exception. Also assume that methods/functions are short enough not to require editor scrolling.
// Style 1
public SomeType aMethod() {
    SomeType result = null;
    if (!guardCondition()) {
        return result;
    }
    doStuffToResult(result);
    doMoreStuffToResult(result);
    return result;
}

// Style 2
public SomeType aMethod() {
    SomeType result = null;
    if (guardCondition()) {
        doStuffToResult(result);
        doMoreStuffToResult(result);
    }
    return result;
}
I prefer the first style, except that I wouldn't create a variable when there is no need for it. I'd do this:
// Style 3
public SomeType aMethod() {
    if (!guardCondition()) {
        return null;
    }
    SomeType result = new SomeType();
    doStuffToResult(result);
    doMoreStuffToResult(result);
    return result;
}
Having been trained in Jackson Structured Programming in the late '80s, my ingrained philosophy was always "a function should have a single entry-point and a single exit-point"; this meant I wrote code according to Style 2.
In the last few years I have come to realise that code written in this style is often overcomplex and hard to read/maintain, and I have switched to Style 1.
Who says old dogs can't learn new tricks? ;)
Style 1 is what the Linux kernel indirectly recommends.
From https://www.kernel.org/doc/Documentation/process/coding-style.rst, chapter 1:
Now, some people will claim that having 8-character indentations makes
the code move too far to the right, and makes it hard to read on a
80-character terminal screen. The answer to that is that if you need
more than 3 levels of indentation, you're screwed anyway, and should fix
your program.
Style 2 adds levels of indentation, ergo, it is discouraged.
Personally, I like style 1 as well. Style 2 makes it harder to match up closing braces in functions that have several guard tests.
I don't know if guard is the right word here. Normally an unsatisfied guard results in an exception or assertion.
But besides this, I'd go with style 1, because it keeps the code cleaner in my opinion. You have a simple example with only one condition, but what happens with many conditions and style 2? It leads to a lot of nested ifs or huge if conditions (with ||, &&). I think it is better to return from a method as soon as you know that you can.
But this is certainly very subjective ^^
Martin Fowler refers to this refactoring as "Replace Nested Conditional with Guard Clauses".
If/else statements also bring cyclomatic complexity, and hence harder-to-test cases: in order to exercise all the if/else blocks you might need to feed in lots of input combinations.
Whereas if there are guard clauses, you can test them first, and deal with the real logic inside the if/else clauses in a clearer fashion.
If you dig through the .NET Framework using .NET Reflector, you will see that the .NET programmers use style 1 (or maybe style 3, already mentioned by unbeli).
The reasons are already mentioned in the answers above, and maybe one other reason is to make the code more readable, concise and clear.
This style is used most when checking input parameters; you always have to do this if you program a kind of framework/library/DLL: first check all input parameters, then work with them.
It sometimes depends on the language and what kinds of "resources" that you are using (e.g. open file handles).
In C, Style 2 is definitely safer and more convenient because a function has to close and/or release any resources that it obtained during execution. This includes allocated memory blocks, file handles, handles to operating system resources such as threads or drawing contexts, locks on mutexes, and any number of other things. Delaying the return until the very end or otherwise restricting the number of exits from a function allows the programmer to more easily ensure that s/he properly cleans up, helping to prevent memory leaks, handle leaks, deadlock, and other problems.
In C++ using RAII-style programming, both styles are equally safe, so you can pick one that is more convenient. Personally I use Style 1 with RAII-style C++. C++ without RAII is like C, so, again, Style 2 is probably better in that case.
In languages like Java with garbage collection, the runtime helps smooth over the differences between the two styles because it cleans up after itself. However, there can be subtle issues with these languages, too, if you don't explicitly "close" some types of objects. For example, if you construct a new java.io.FileOutputStream and do not close it before returning, then the associated operating system handle will remain open until the runtime garbage collects the FileOutputStream instance that has fallen out of scope. This could mean that another process or thread that needs to open the file for writing may be unable to until the FileOutputStream instance is collected.
Although it goes against the best practices I have been taught, I find it much better to reduce the nesting of if statements when I have a condition such as this. I think it is much easier to read, and although it exits in more than one place, it is still very easy to debug.
I would say that Style 1 became more widely used because it is the best practice if you combine it with small methods.
Style 2 looks like a better solution when you have big methods. When you have them... you have some common code that you want to execute no matter how you exit. But the proper solution is not to force a single exit point; it is to make the methods smaller.
For example, if you want to extract a sequence of code from a big method, and this method has two exit points, you start to have problems; it is hard to do automatically. When I have a big method written in Style 1, I usually transform it into Style 2, then I extract methods, and then each of those should end up as Style 1 code again.
So Style 1 is best, but it works with small methods.
Style 2 is not as good, but it is recommended if you have big methods that you don't want to, or don't have time to, split.
I prefer to use method #1 myself; it is logically easier to read and also logically more similar to what we are trying to do (if something bad happens, exit the function NOW, do not pass go, do not collect $200).
Furthermore, most of the time you would want to return a value that is not a logically possible result (i.e. -1) to indicate to the caller that the function failed to execute properly, so appropriate action can be taken. This lends itself better to method #1 as well.
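A small C sketch of that convention: the function returns a computed value, or -1 (not a possible real result here) when the guard fails, so the caller can detect the failure; the names are made up for the example:

#include <stdio.h>
#include <stddef.h>

/* Returns the count of positive values, or -1 for invalid input. */
static int count_positive(const int *values, int n)
{
    if (values == NULL || n < 0)
        return -1;              /* guard failed: exit NOW */

    int count = 0;
    for (int i = 0; i < n; ++i)
        if (values[i] > 0)
            ++count;
    return count;
}

int main(void)
{
    int data[] = {3, -1, 7};
    int c = count_positive(data, 3);
    if (c == -1)
        fprintf(stderr, "count_positive failed\n");
    else
        printf("%d positive values\n", c);
    return 0;
}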
I would say "It depends on..."
In situations where I have to perform a cleanup sequence with more than 2 or 3 lines before leaving a function/method I would prefer style 2 because the cleanup sequence has to be written and modified only once. That means maintainability is easier.
In all other cases I would prefer style 1.
Number 1 is typically the easy, lazy and sloppy way. Number 2 expresses the logic cleanly. What others have pointed out is that yes it can become cumbersome. This tendency though has an important benefit. Style #1 can hide that your function is probably doing too much. It doesn't visually demonstrate the complexity of what's going on very well. I.e. it prevents the code from saying to you "hey this is getting a bit too complex for this one function". It also makes it a bit easier for other developers that don't know your code to miss those returns sprinkled here and there, at first glance anyway.
So let the code speak. When you see long conditions appearing or nested if statements it is saying that maybe it would be better to break this stuff up into multiple functions or that it needs to be rewritten more elegantly.

Spartan Programming

I really enjoyed Jeff's post on Spartan Programming. I agree that code like that is a joy to read. Unfortunately, I'm not so sure it would necessarily be a joy to work with.
For years I have read about and adhered to the "one-expression-per-line" practice. I have fought the good fight and held my ground when many programming books countered this advice with example code like:
while (bytes = read(...))
{
    ...
}
while (GetMessage(...))
{
    ...
}
Recently, I've advocated one expression per line for more practical reasons: debugging and production support. Getting a log file from production that claims a NullPointerException at "line 65", which reads:
ObjectA a = getTheUser(session.getState().getAccount().getAccountNumber());
is frustrating and entirely avoidable. Short of grabbing an expert with the code that can choose the "most likely" object that was null ... this is a real practical pain.
One expression per line also helps out quite a bit while stepping through code. I practice this with the assumption that most modern compilers can optimize away all the superfluous temp objects I've just created ...
I try to be neat - but cluttering my code with explicit objects sure feels laborious at times. It does not generally make the code easier to browse - but it really has come in handy when tracing things down in production or stepping through my or someone else's code.
What style do you advocate and can you rationalize it in a practical sense?
In The Pragmatic Programmer, Hunt and Thomas talk about a study they term the Law of Demeter; it focuses on the coupling of functions to modules other than their own. By never allowing a function to reach a third level of coupling, you significantly reduce the number of errors and increase the maintainability of the code.
So:
ObjectA a = getTheUser(session.getState().getAccount().getAccountNumber());
Is close to a felony because we are 4 objects down the rat hole. That means to change something in one of those objects I have to know that you called this whole stack right here in this very method. What a pain.
Better:
Account.getUser();
Note this runs counter to the expressive forms of programming that are now really popular with mocking software. The trade off there is that you have a tightly coupled interface anyway, and the expressive syntax just makes it easier to use.
I think the ideal solution is to find a balance between the extremes. There is no way to write a rule that will fit in all situations; it comes with experience. Declaring each intermediate variable on its own line will make reading the code more difficult, which will also contribute to the difficulty in maintenance. By the same token, debugging is much more difficult if you inline the intermediate values.
The 'sweet spot' is somewhere in the middle.
One expression per line.
There is no reason to obfuscate your code. The extra time you take typing the few extra terms, you save in debug time.
I tend to err on the side of readability, not necessarily debuggability. The examples you gave should definitely be avoided, but I feel that judicious use of multiple expressions can make the code more concise and comprehensible.
I'm usually in the "shorter is better" camp. Your example is good:
ObjectA a = getTheUser(session.getState().getAccount().getAccountNumber());
I would cringe if I saw that over four lines instead of one--I don't think it'd make it easier to read or understand. The way you presented it here, it's clear that you're digging for a single object. This isn't better:
obja State = session.getState();
objb Account = State.getAccount();
objc AccountNumber = Account.getAccountNumber();
ObjectA a = getTheUser(AccountNumber);
This is a compromise:
objb Account = session.getState().getAccount();
ObjectA a = getTheUser(Account.getAccountNumber());
but I still prefer the single line expression. Here's an anecdotal reason: it's difficult for me to reread and error-check the 4-liner right now for dumb typos; the single line doesn't have this problem because there are simply fewer characters.
ObjectA a = getTheUser(session.getState().getAccount().getAccountNumber());
This is a bad example, probably because you just wrote something off the top of your head.
You are assigning, to variable named a of type ObjectA, the return value of a function named getTheUser.
So let's assume you wrote this instead:
User u = getTheUser(session.getState().getAccount().getAccountNumber());
I would break this expression like so:
Account acc = session.getState().getAccount();
User user = getTheUser( acc.getAccountNumber() );
My reasoning is: how would I think about what I am doing with this code?
I would probably think: "first I need to get the account from the session and then I get the user using that account's number".
The code should read the way you think. Variables should refer to the main entities involved; not so much to their properties (so I wouldn't store the account number in a variable).
A second factor to have in mind is: will I ever need to refer to this entity again in this context?
If, say, I'm pulling more stuff out of the session state, I would introduce SessionState state = session.getState().
This all seems obvious, but I'm afraid I have some difficulty putting in words why it makes sense, not being a native English speaker and all.
Maintainability, and with it, readability, is king. Luckily, shorter very often means more readable.
Here are a few tips I enjoy using to slice and dice code:
Variable names: how would you describe this variable to someone else on your team? You would not say "the numberOfLinesSoFar integer". You would say "numLines" or something similar - comprehensible and short. Don't pretend like the maintainer doesn't know the code at all, but make sure you yourself could figure out what the variable is, even if you forgot your own act of writing it. Yes, this is kind of obvious, but it's worth more effort than I see many coders put into it, so I list it first.
Control flow: Avoid lots of closing clauses at once (a series of }'s in C++). Usually when you see this, there's a way to avoid it. A common case is something like:
if (things_are_ok) {
    // Do a lot of stuff.
    return true;
} else {
    ExpressDismay(error_str);
    return false;
}
can be replaced by
if (!things_are_ok) return ExpressDismay(error_str);
// Do a lot of stuff.
return true;
if we can get ExpressDismay (or a wrapper thereof) to return false.
Loop iterations: the more standard, the better. For shorter loops, it's good to use one-character iterators when the variable is never used except as an index into a single object.
The particular case I would argue here is against the "right" way to use an STL container:
for (vector<string>::iterator a_str = my_vec.begin(); a_str != my_vec.end(); ++a_str)
is a lot wordier, and requires overloaded pointer operators *a_str or a_str->size() in the loop. For containers that have fast random access, the following is a lot easier to read:
for (int i = 0; i < my_vec.size(); ++i)
with references to my_vec[i] in the loop body, which won't confuse anyone.
Finally, I often see coders take pride in their line counts. But it's not the number of lines that counts! I'm not sure of the best way to implement this, but if you have any influence over your coding culture, I'd try to shift the reward toward those with compact classes :)
Good explanation. I think this is a version of the general divide-and-conquer mentality.