Is it considered bad form to execute a function within a conditional statement? - language-agnostic

Consider a situation in which you need to call successive routines and stop as soon as one returns a value that evaluates as truthy (true, an object, 1, str(1)).
It's very tempting to do this:
if (fruit = getOrange())
elseif (fruit = getApple())
elseif (fruit = getMango())
else fruit = new Banana();
return fruit;
I like it, but this isn't a very common style in what could be considered professional production code. One is more likely to see more elaborate code like:
fruit = getOrange();
if(!fruit){
    fruit = getApple();
    if(!fruit){
        fruit = getMango();
        if(!fruit){
            fruit = new Banana();
        }
    }
}
return fruit;
According to the dogma on basic structures, is the first form acceptable? Would you recommend it?
Edit:
I apologize to those who assumed that these functions were meant to be factories or constructors. They're not; they're just placeholders. The question is more about syntax than "factorying". These functions could just as well be lambdas.

If you want a succinct syntax, several languages allow using the "logical or" for this purpose (C# explicitly provides a null-coalescing operator, because null is not falsy there).
Python:
fruit = ( getOrange() or
getApple() or
getMango() or
Banana() )
C#:
fruit = getOrange() ??
getApple() ??
getMango() ??
new Banana();

I can think of two alternatives.
The first is only allowable in languages like yours (PHP?), where single = in a conditional is ok.
if ( (fruit = getOrange()) != null)
elseif ( (fruit = getApple()) != null)
elseif ( (fruit = getMango()) != null)
else fruit = new Banana();
This makes it clear that you are doing a comparison and that the single = signs are not a mistake. The second alternative:
fruit = getOrange();
if(!fruit) fruit = getApple();
if(!fruit) fruit = getMango();
if(!fruit) fruit = new Banana();
Just like your second example, but gets rid of the ugly extra nesting.

In a strongly-typed language that doesn't equate 0/null to false and non-0/non-null to true, I would say it's probably safe, though marginally less readable in the general case, where your method names and number of parameters may be larger. In languages where 0/null do equate to false and non-0/non-null to true, I would personally avoid it, except for certain standard idioms, simply because of the potential danger of confusing assignment with equality checking when reading the code. Some idioms in weakly-typed languages, like C, are so pervasive that it doesn't make sense to avoid them, e.g.,
while ((line = getline()) != null) {
...
}

The problem, as I see it, is not the structure, but the driving rules. Why does getOrange come before getApple, etc?
You are probably more likely to see something more data-driven:
enum FruitEnum
{
Orange, Apple, Mango, Banana
}
and separately,
List<FruitEnum> orderedFruit = getOrderedFruit();
int i = 0;
FruitObj fruit = null;
while (fruit == null && i < orderedFruit.Count)
{
    fruit = FruitFactory.Get(orderedFruit[i++]);
}
if (fruit == null)
{
    throw new FruitNotFoundException();
}
That said, to simplify your code, you can use a coalesce operator:
fruit = getOrange() ?? getApple() ?? getMango() ?? new Banana();

In C or C++, you could write:
return (fruit = getOrange()) ? fruit :
(fruit = getApple()) ? fruit :
(fruit = getMango()) ? fruit :
new Banana();
The reason to avoid both this and your first version isn't "dogma on basic structures", it's that assignment on its own in a condition is confusing. Not all languages support it, for one thing. For another it's easily misread as ==, or the reader might be uncertain whether you really meant it, or perhaps intended ==. Adding != 0 to each condition gets quite dense and wordy.
GCC has an extension to allow:
return getOrange() ? : getApple() ? : getMango() ? : new Banana();
The same thing can often be achieved with || or or (but not in C or C++).
Another possibility is:
do {
    fruit = getOrange();
    if (fruit) break;
    fruit = getApple();
    if (fruit) break;
    fruit = getMango();
    if (fruit) break;
    fruit = new Banana();
} while (false);
This works even better in a language where you can break out of a basic block, as you can with last in Perl, since you can dispense with the do / while (false). But probably only assembly programmers will actually like it.

To answer your question directly: it is usually bad form to have side effects in a conditional statement.
As a workaround, you can store your fruit constructors in an array and find the first one that returns non-null (pseudocode):
let constructors = [getOrange; getApple; getMango; fun () -> new Banana()]
foreach constructor in constructors
    let fruit = constructor()
    if fruit != null
        return fruit
It's like a null-coalescing operator, but more generalized. In C#, you'd probably use LINQ as follows:
var fruit = constructors
    .Select(constructor => constructor())
    .Where(x => x != null)
    .First();
At least this way you can pass your constructors around like a factory class, instead of hard-coding them with the null-coalesce operator.
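In C#, a minimal sketch of the same idea with a list of delegates (assuming a common Fruit base type; Fruit, Banana and the getOrange/getApple/getMango factories are the question's hypothetical placeholders):
// Assumes: using System; using System.Collections.Generic; using System.Linq;
// Fruit, Banana and the getXxx() factories are the question's hypothetical placeholders.
var constructors = new List<Func<Fruit>>
{
    getOrange,
    getApple,
    getMango,
    () => new Banana()
};

// Select is deferred, so later factories are never invoked once one returns non-null.
Fruit fruit = constructors
    .Select(make => make())
    .First(f => f != null);
Since the final Banana factory never returns null, First will always find a match here; otherwise FirstOrDefault would be the safer choice.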

I don't think anybody's mentioned yet that the first version can be kind of a pain to step through in a debugger. The difference in readability is debatable, but in general I avoid assignments and function calls in conditionals to make it easier to trace through the execution when necessary (even if I never need to debug it, someone else may need to modify the code later).


What is the technical term for a programming language's operator evaluation order?

Several procedures such as array destructuring in JavaScript or collection manipulation in Python have prompted me to evaluate an object's property or method to check if it even exists before proceeding, often resulting in the following pattern:
var value = collection.length
if value != null {
    if value == targetValue {
        /* do something */
    }
}
In an attempt to make "cleaner" code I want to do something like:
if value != null && value == targetValue {
/* do something */
}
or with a ternary operator:
var value = collection.length != null ? collection.length : 0
However, I'm never sure whether the compiler will stop evaluating as soon as the first comparison (the null check) fails, or whether it will keep going and produce an error. I can of course write small unit tests to find out, but I'd prefer to know the right term to look up in any language's documentation. What is this term, or is it perhaps the same in all languages?
This is known as short-circuit evaluation. It's quite consistent between languages.
In most languages, && will only evaluate the second argument if the first was true, and || will only evaluate its second if the first was false.
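A small C# illustration of the pattern from the question (the collection and target length are hypothetical):
// Assumes: using System;
int[] collection = null;
int targetLength = 3;

// Because && short-circuits, collection.Length is never evaluated when collection is null,
// so this cannot throw a NullReferenceException.
if (collection != null && collection.Length == targetLength)
{
    Console.WriteLine("collection has the target length");
}
else
{
    Console.WriteLine("collection is null or has a different length");
}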

Using predicates in Alloy

I am trying to use two predicates (say, methodsWiThSameParameters and methodsWiThSameReturn) from another one (i.e. checkOverriding), but I receive the following error: "There are no commands to execute". Any clues?
I also tried to use functions, but with no success, either due to syntax issues or because functions do not return boolean values.
They are part of a Java metamodel specified in Alloy, as I mentioned in some earlier questions.
pred checkOverriding[]{
    //check accessibility of methods involved in overriding
    no c1, c2: Class {
        c1 = c2.^extend
        some m1, m2: Method |
            m1 in c1.methods && m2 in c2.methods && m1.id = m2.id
            && methodsWiThSameParameters[m1, m2] && methodsWiThSameReturn[m1, m2] &&
            ( (m1.acc = protected && (m2.acc = private_ || #(m2.acc) = 0 )) ||
              (m1.acc = public && (m2.acc != public || #(m2.acc) = 0 )) ||
              (#(m1.acc) = 0 && m2.acc != private_ )
            )
    }
}
pred methodsWiThSameParameters [first, second: Method]{
    first.param = second.param || (#(first.param) = 0 && #(second.param) = 0)
}
pred methodsWiThSameReturn [first, second: Method]{
    first.return = second.return || (#(first.return) = 0 && #(second.return) = 0)
}
Thank you for your response, Mr. C. M. Sperberg-McQueen, but I think I was not clear enough in my question.
My predicate, say checkOverriding, is being called from a fact like this:
fact chackJavaWellFormednessRules{
checkOverriding[]
}
Thus, I still do not understand the error: "There are no commands to execute".
You've defined predicates; they have a purely declarative semantics and they will be true in some subset of instances of the model and false in the complementary subset.
If you want the Analyzer to do anything, you need to give it an instruction; the instruction to search for an instance of a predicate is run. So you'll want to say something like
run methodsWithSameParameters for 3
or
run methodsWithSameParameters for 5
run methodsWithSameReturn for 5
Note that you can have more than one instruction in an Alloy model; the Analyzer lets you tell it which to execute.
[Addendum]
The Alloy Analyzer regards the keywords run and check (and only them) as 'commands'. From your description, it sounds very much as if you don't have any occurrences of those keywords in the model.
If all you want to do is to see some instances of the Alloy model (to verify that the model is not self-contradictory), then the simplest way is to add something like the following to the model:
pred show {}
run show for 3
Or, if you already have a named predicate, you could simply add a run command for that predicate:
run checkOverriding
But without a clause in the model that begins with either run or check, you do not have a 'command' in the model.
You say that you have defined a predicate (checkOverriding) and then specified in a fact that that predicate is always satisfied. This amounts to saying that the predicate checkOverriding is always true (and might just as well be done by making checkOverriding a fact instead of a predicate), but it has a purely declarative meaning, and it does not count as a "command". If you want Alloy to find instances of a predicate, you must use the run command; if you want Alloy to find counter-examples for an assertion, you must use the check command.

Programming style: Should you check for null in functions or out of functions?

When you call a function with an object, should you check for null in the function, before calling the function, or both? What is better programming practice?
Something like this
Test a = getTest();
if (a != null) {
    myFunc(a);
}
def myFunc(x):
    print x.val();
or
Test a = new Test();
myFunc(a);
def myFunc(x):
    if (x != null) {
        print x.val();
    }
or
Test a = new Test();
if (a != null) {
    myFunc(a);
}
def myFunc(x):
    if (x != null) {
        print x.val();
    }
I can see why putting the null check in the function is good, because then you don't have to check everywhere, but sometimes you need to check before calling the function, so then it feels redundant to check twice...
Can anyone explain this?
I think it depends on the intended use and/or distribution of the code. This is really a matter of opinion, but I agree with Uncle Bob's view on "defensive programming". If it's a library for your use or your team's use, you should avoid defensive programming; after all, you trust your coworkers not to pass null into a function, right?
If, however, you're writing a public API which may be used by anyone, you should make the proper checks, especially where passing null could cause a crash.
Defensive programming, in non-public APIs, is a smell, and a symptom,
of teams that don't do TDD.
#unclebobmartin
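As a sketch of what the public-API check might look like in C# (Test and its val() member are the question's hypothetical placeholders):
// Assumes: using System;
public static void myFunc(Test x)
{
    // Guard clause at the public boundary: fail fast with a clear exception
    // instead of letting a NullReferenceException surface deeper inside.
    if (x == null)
    {
        throw new ArgumentNullException(nameof(x));
    }

    Console.WriteLine(x.val());
}
Inside non-public code, the same method could simply assume a non-null argument and leave the check to the caller.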

Creating complex XPQuery - LINQ to SQL with nested lists

Any hint on what's wrong with the query below?
return new ItemPricesViewModel()
{
    Source = (from o in XpoSession.Query<PRICE>()
              select new ItemPriceViewModel()
              {
                  ID = o.ITEM_ID.ITEM_ID,
                  ItemCod = o.ITEM_ID.ITEM_COD,
                  ItemModifier = o.ITEM_MODIFIER_ID.ITEM_MODIFIER_COD,
                  ItemName = o.ITEM_ID.ITEM_COD,
                  ItemID = o.ITEM_ID.ITEM_ID,
                  ItemModifierID = o.ITEM_MODIFIER_ID.ITEM_MODIFIER_ID,
                  ItemPrices = (from d in o
                                where d.ITEM_ID.ITEM_ID == o.ITEM_ID.ITEM_ID && d.ITEM_MODIFIER_ID.ITEM_MODIFIER_ID == o.ITEM_MODIFIER_ID.ITEM_MODIFIER_ID
                                select new Price()
                                {
                                    ID = o.PRICE_ID,
                                    PriceList = o.PRICELIST_ID.PRICELIST_,
                                    Price = o.PRICE_
                                }).ToList()
              }).ToList()
};
The o in the subquery is shown in red and I get the message "Could not find an implementation of the query pattern for source type . 'Where' not found."
I would like to have distinct ItemID, ItemModifier: should I create a custom IEqualityComparer to do it?
Thank you!
It seems like XPO is not able to handle this scenario. For reference, this is what you could do with DbContext.
It sounds like maybe you want a GroupBy. Try something like this.
var result = dbContext.Prices
    .GroupBy(p => new { p.ItemName, p.ItemTypeName })
    .Select(g => new Item
    {
        ItemName = g.Key.ItemName,
        ItemTypeName = g.Key.ItemTypeName,
        Prices = g.Select(p => new Price
        {
            Price = p.Price
        }).ToList()
    })
    .Skip(x)
    .Take(y)
    .ToList();
Probable cause
In general, XPO does not support "free joins" in most cases. It was explicitly stated somewhere in their knowledge base or Q/A site; if I hit that article again, I'll include a link to it.
In your original code example, you were trying to perform a "free join" in the INNER query. The 'WHERE' clause was doing a join-by-key, probably navigational, but it also contained an extra filter by "modifier", which probably is not part of the definition of the relation.
Moreover, the query tried to reuse the IQueryable<PRICE> o in the inner query - which actually seems somewhat supported by XPO - but if you ever add any prefiltering ('where') to the top-level 'o', it would have high odds of breaking again.
The docs state that XPO supports only navigational joins, along paths formed by properties and/or XPCollections defined in your XPObjects. This applies to XPO as a whole, so to XPQuery too. All other kinds of joins are called "free joins" and either:
are silently emulated by XPO by fetching the related objects, extracting key values from them and rewriting the query into multiple roundtrips with a series of partial queries that fetch full objects with WHERE-id-IN-(#p0,#p1,#p2,...) - but this happens only in some of the simplest cases,
or are "not fully supported", meaning they throw exceptions and require you to manually split or rephrase the query.
Possible direct solution schemes
If ITEM_ID is a relation and an XPCollection in the PRICE class, then you could rewrite your query so that it fetches a PRICE object, then builds up a result object and initializes its fields with the PRICE object's properties. Something like:
return new ItemPricesViewModel()
{
    Source = (from o in XpoSession.Query<PRICE>().AsEnumerable()
              select new ItemPriceViewModel()
              {
                  ID = o.ITEM_ID.ITEM_ID,
                  ItemCod = o.ITEM_ID.ITEM_COD,
                  ....
                  ItemModifierID = o.ITEM_MODIFIER_ID.ITEM_MODIFIER_ID,
                  ItemPrices = (from d in o
                                where d.ITEM_ID.ITEM_ID == ....
                                select new Price()
                                .... .... ....
};
Note the 'AsEnumerable' that breaks the query and ensures that PRICE objects are fetched first, instead of trying to translate the whole query. It is very probable that this would "just work".
Also, splitting the query into explicit stages sometimes helps XPO analyze it:
return new ItemPricesViewModel()
{
    Source = (from o in XpoSession.Query<PRICE>()
              select new
              {
                  id = o.ITEM_ID.ITEM_ID,
                  itemcod = o.ITEM_ID.ITEM_COD,
                  ....
              }
             ).AsEnumerable()
              .Select(temp =>
                  new ItemPriceViewModel()
                  {
                      ID = temp.id,
                      ItemCod = temp.itemcod,
                      ....
                      ItemPrices = (from d in XpoSession.Query<PRICE>()
                                    where d.ITEM_ID.ITEM_ID == ....
                                    select new Price()
                                    .... .... ....
};
Here, note that I first fetch the item data from the server, then construct the item on the 'client', and then build the required groupings. Note that I could not refer to the variable o anymore. In this precise case and example, unsurprisingly, the second (split) version would probably be even slower than the first one, since it would fetch all PRICEs and then re-fetch the groupings through additional queries, while the first one would just fetch all PRICEs and then calculate the groups in memory based on the PRICEs already fetched. This is not a side effect of my laziness; it is a common pitfall when rewriting LINQ queries, so I included it as a warning :)
Both of these code examples are NOT RECOMMENDED for your case, as they would probably have very poor performance, especially if you have many PRICEs in the table, which is highly likely. I included them only as an example of how you could rewrite the query to simplify its structure so that XPO can eat it without choking. However, you have to be really careful and pay attention to little details, as you can very easily spoil the performance.
observations and real solution
However, it is worth noting that they are not that much worse than your original query. It was itself quite poor, since it tried to perform something near O(N^2) row-fetches from the table just to group the rows by "ITEM_ID" and then format the results as separate objects. Properly done, it would be something like O(N lg N) + O(N), so regardless of being supported or not, your alternate attempt with GroupBy is surely a much better approach, and I'm really glad you found it yourself.
Very often, when you are trying to split/simplify XPQuery expressions as I did above, you implicitly rethink the problem and find an easier and simpler way to express a query that was initially not supported or just crashed.
Unfortunately, your query was in fact quite simple. For really complex queries that cannot be "just rephrased", splitting into stages and making some of the join-filter work happen on the 'client' is unavoidable. But again, doing that with XPCollections or XPViews with CriteriaOperators is impossible too, so either we have to bear with it or use plain, direct, handcrafted SQL.
Sidenote:
XPO as a whole has problems with "free joins"; they are "not fully supported" not only in XPQuery, but there's also not much for them in XPCollection, XPView, CriteriaOperators, etc. But it is worth noting that, at least in "my version" of DX11, XPQuery has very poor LINQ support overall.
I've hit many cases where a proper LINQ query was:
throwing "NotSupportedException", mostly in FreeJoins, but also very often with complex GroupBy or Select-with-Projection, GroupJoin, and many others - sometimes even Distinct(!) seemed to malfunction
throwing "NullReferenceExceptions" at some proper type conversions (XPO tried to interprete a column that held INT/NULL as an object..), often I had to write some completely odd and artificial expressions like foo!=null && foo.bar!=123 instead of foo = 123 despite the 'foo' being an public int Foo {get;set;}, all because the DX could not cope properly with NULLs in the database (because XPO created nullable-INT column for this property.. but that's another story)
throwing other random ArgumentException/InvalidOperation exceptions from other constructs
or even analyzing the query structure improperly, for example this one is usually valid:
session.Query<ABC>()
.Where( abc => abc.foo == "somefilter" )
.Select( abc => new { first = abc, b = abc } )
.ToArray();
but things like this one usually throws:
session.Query<ABC>()
.Select( abc => new { first = abc, b = abc } )
.Where ( temp => temp.first.foo == "somefilter" )
.ToArray();
but this one is valid:
session.Query<ABC>()
.Select( abc => new { first = abc, b = abc } )
.ToArray()
.Where ( temp => temp.first.foo == "somefilter" )
.ToArray();
The middle code example usually throws with an error revealing that the XPO layer was trying to find the ".first.foo" path inside the ABC class, which is obviously wrong since at that point the element type isn't ABC anymore but an anonymous class.
disclaimer
I've already noted that, but let me repeat: these observations relate to DX11 and most probably earlier versions as well. I do not know which of these issues were fixed in DX12 and above (if any were at all!).

Should I use `!IsGood` or `IsGood == false`?

I keep seeing code that does checks like this
if (IsGood == false)
{
DoSomething();
}
or this
if (IsGood == true)
{
DoSomething();
}
I hate this syntax, and always use the following syntax.
if (IsGood)
{
DoSomething();
}
or
if (!IsGood)
{
DoSomething();
}
Is there any reason to use '== true' or '== false'?
Is it a readability thing? Do people just not understand Boolean variables?
Also, is there any performance difference between the two?
I follow the same syntax as you, it's less verbose.
People (more often beginners) prefer to use == true just to be sure that it's what they want. They are used to using an operator in their conditionals... they find it more readable. But once you get more advanced, you find it irritating because it's too verbose.
I always chuckle (or throw something at someone, depending on my mood) when I come across
if (someBoolean == true) { /* ... */ }
because surely if you can't rely on the fact that your comparison returns a boolean, then you can't rely on comparing the result to true either, so the code should become
if ((someBoolean == true) == true) { /* ... */ }
but, of course, this should really be
if (((someBoolean == true) == true) == true) { /* ... */ }
but, of course ...
(ah, compilation failed. Back to work.)
I would prefer the shorter variant. But sometimes == false helps make your code even shorter:
For a real-life scenario in projects using C# 2.0, I see only one good reason to do this: the bool? type. The three-state bool? is useful, and it is easy to check one of its possible values this way.
Actually, you can't use (!IsGood) if IsGood is bool?. But writing (IsGood.HasValue && IsGood.Value) is worse than (IsGood == true).
Play with this sample to get the idea:
bool? value = true; // try false and null too
if (value == true)
{
    Console.WriteLine("value is true");
}
else if (value == false)
{
    Console.WriteLine("value is false");
}
else
{
    Console.WriteLine("value is null");
}
There is one more case I've just discovered where if (!IsGood) { ... } is not the same as if (IsGood == false) { ... }. But this one is not realistic ;) Operator overloading may kind of help here :) (and the operator true/false pair, which AFAIK is discouraged in C# 2.0 because its intended purpose was to provide bool?-like behavior for user-defined types, and now you can get that with a standard type!)
using System;
namespace BoolHack
{
    class Program
    {
        public struct CrazyBool
        {
            private readonly bool value;
            public CrazyBool(bool value)
            {
                this.value = value;
            }
            // Just to make nice init possible ;)
            public static implicit operator CrazyBool(bool value)
            {
                return new CrazyBool(value);
            }
            public static bool operator==(CrazyBool crazyBool, bool value)
            {
                return crazyBool.value == value;
            }
            public static bool operator!=(CrazyBool crazyBool, bool value)
            {
                return crazyBool.value != value;
            }
            #region Twisted logic!
            public static bool operator true(CrazyBool crazyBool)
            {
                return !crazyBool.value;
            }
            public static bool operator false(CrazyBool crazyBool)
            {
                return crazyBool.value;
            }
            #endregion Twisted logic!
        }
        static void Main()
        {
            CrazyBool IsGood = false;
            if (IsGood)
            {
                if (IsGood == false)
                {
                    Console.WriteLine("Now you should understand why this type is called CrazyBool!");
                }
            }
        }
    }
}
So... please, use operator overloading with caution :(
According to Code Complete, a book Jeff got his name from and holds in high regard, the following is the way you should treat booleans.
if (IsGood)
if (!IsGood)
I used to actually compare the booleans, but I figured why add an extra step to the process and treat booleans as second-rate types. In my view, a comparison returns a boolean and a boolean type is already a boolean, so why not just use the boolean.
Really, what the debate comes down to is using good names for your booleans. Like you did above, I always phrase my boolean objects in the form of a question, such as
IsGood
HasValue
etc.
The technique of testing specifically against true or false is definitely bad practice if the variable in question is really supposed to be used as a boolean value (even if its type is not boolean) - especially in C/C++. Testing against true can (and probably will) lead to subtle bugs:
These apparently similar tests give opposite results:
// needs C++ to get true/false keywords
// or needs macros (or something) defining true/false appropriately
#include <stdio.h>

int main( int argc, char* argv[])
{
    int isGood = -1;
    if (isGood == true) {
        printf( "isGood == true\n");
    }
    else {
        printf( "isGood != true\n");
    }
    if (isGood) {
        printf( "isGood is true\n");
    }
    else {
        printf( "isGood is not true\n");
    }
    return 0;
}
This displays the following result:
isGood != true
isGood is true
If you feel the need to test a variable that is used as a boolean flag against true/false (which shouldn't be done, in my opinion), you should use the idiom of always testing against false, because false can have only one value (0) while true can have multiple possible values (anything other than 0):
if (isGood != false) ... // instead of using if (isGood == true)
Some people will have the opinion that this is a flaw in C/C++, and that may be true. But it's a fact of life in those languages (and probably many others) so I would stick to the short idiom, even in languages like C# that do not allow you to use an integral value as a boolean.
See this SO question for an example of where this problem actually bit someone...
isalpha() == true evaluates to false??
I agree with you (and am also annoyed by it). I think it's just a slight misunderstanding that IsGood == true evaluates to bool, which is what IsGood was to begin with.
I often see these near instances of SomeStringObject.ToString().
That said, in languages that play looser with types, this might be justified. But not in C#.
Some people find the explicit check against a known value to be more readable, as you can infer the variable type by reading. I'm agnostic as to whether one is better than the other. They both work. I find that if the variable inherently holds an "inverse" then I seem to gravitate toward checking against a value:
if(IsGood) DoSomething();
or
if(IsBad == false) DoSomething();
instead of
if(!IsBad) DoSomething();
But again, it doesn't matter much to me, and I'm sure it ends up as the same IL.
Readability only..
If anything the way you prefer is more efficient when compiled into machine code. However I expect they produce exactly the same machine code.
From the answers so far, this seems to be the consensus:
The short form is best in most cases. (IsGood and !IsGood)
Boolean variables should be written as a positive. (IsGood instead of IsBad)
Since most compilers will output the same code either way, there is no performance difference, except in the case of interpreted languages.
This issue has no clear winner and could probably be seen as a battle in the religious war of coding style.
I prefer to use:
if (IsGood)
{
DoSomething();
}
and
if (IsGood == false)
{
DoSomething();
}
as I find this more readable - the ! is just too easy to miss (both when reading and when typing); also, "if not IsGood then..." just doesn't sound right when I hear it, as opposed to "if IsGood is false then...", which sounds better.
It's possible (although unlikely, at least I hope) that in C code TRUE and FALSE are #defined to things other than 1 and 0. For example, a programmer might have decided to use 0 as "true" and -1 as "false" in a particular API. The same is true of legacy C++ code, since "true" and "false" were not always C++ keywords, particularly back in the day before there was an ANSI standard.
It's also worth pointing out that some languages--particularly script-y ones like Perl, JavaScript, and PHP--can have funny interpretations of what values count as true and what values count as false. It's possible (although, again, unlikely, one hopes) that "foo == false" means something subtly different from "!foo". This question is tagged "language agnostic", and a language can define the == operator to not work in ways compatible with the ! operator.
I've seen the following as a C/C++ style requirement.
if ( true == FunctionCall()) {
// stuff
}
The reasoning was that if you accidentally put "=" instead of "==", the compiler will bail on assigning a value to a constant. In the meantime, it hurts the readability of every single if statement.
Occasionally it has uses in terms of readability. Sometimes a named variable or function call can end up being a double-negative which can be confusing, and making the expected test explicit like this can aid readability.
A good example of this might be strcmp() in C/C++, which returns 0 if the strings are equal, and otherwise < 0 or > 0, depending on where the difference is. So you will often see:
if(strcmp(string1, string2)==0) { /*do something*/ }
Generally however I'd agree with you that
if(!isCached)
{
Cache(thing);
}
is clearer to read.
I prefer the !IsGood approach, and I think most people coming from a C-style language background will prefer it as well. I'm only guessing here, but I think that most people who write IsGood == False come from a more verbose language background like Visual Basic.
The only thing worse is
if (true == IsGood) {....
I never understood the thought behind that method.
The !IsGood pattern is easier to find than IsGood == false when reduced to a regular expression.
/\b!IsGood\b/
vs
/\bIsGood\s*==\s*false\b/
/\bIsGood\s*!=\s*true\b/
/\bIsGood\s*(?:==\s*false|!=\s*true)\b/
For readability, you might consider a property that relies on the other property:
public bool IsBad => !IsGood;
Then, you can really get across the meaning:
if (IsBad)
{
...
}
I prefer !IsGood because, to me, it is clearer and more concise. Checking whether a boolean == true is redundant, so I would avoid that. Syntactically, though, I don't think there is a difference when checking if IsGood == false.
In many languages, the difference is that in one case, you are having the compiler/interpreter dictate the meaning of true or false, while in the other case, it is being defined by the code. C is a good example of this.
if (something) ...
In the above example, "something" is compared to the compiler's definition of "true." Usually this means "not zero."
if (something == true) ...
In the above example, "something" is compared to "true." Both the type of "true" (and therefore the comparability) and the value of "true" may or may not be defined by the language and/or the compiler/interpreter.
Often the two are not the same.
You forgot:
if(IsGood == FileNotFound)
It seems to me (though I have no proof to back this up) that people who start out in C#/Java-type languages prefer the "if (CheckSomething())" method, while people who start in other languages (C++, specifically Win32 C++) tend to use the other method out of habit: in Win32, "if (CheckSomething())" won't work if CheckSomething returns a BOOL (instead of a bool); and in many cases, API functions explicitly return a 0/1 int/INT rather than a true/false value (which is what a BOOL is).
I've always used the more verbose method, again out of habit. They're syntactically the same; I don't buy the "verbosity irritates me" nonsense, because the programmer is not the one that needs to be impressed by the code (the computer is). And, in the real world, the skill level of any given person looking at the code I've written will vary, and I don't have the time or inclination to explain the peculiarities of statement evaluation to someone who may not understand little unimportant bits like that.
Ah, I have some co-workers favoring the longer form, arguing it is more readable than the tiny !
I started to "fix" that, since booleans are self-sufficient, then I dropped the crusade... ^_^ They don't like code clean-up here anyway, arguing it makes integration between branches difficult (that's true, but then you live forever with bad-looking code...).
If you name your boolean variable correctly, it should read naturally:
if (isSuccessful) vs. if (returnCode)
I might indulge in boolean comparison in some cases, like:
if (PropertyProvider.getBooleanProperty(SOME_SETTING, true) == true) because it reads less "naturally".
For some reason I've always liked
if (IsGood)
more than
if (!IsBad)
and that's why I kind of like Ruby's unless (but it's a little too easy to abuse):
unless (IsBad)
and even more if used like this:
raise InvalidColor unless AllowedColors.include?(color)
Cybis, when coding in C++ you can also use the not keyword. It has been part of the standard for a long time, so this code is perfectly valid:
if (not foo ())
    bar ();
Edit: BTW, I forgot to mention that the standard also defines other boolean keywords such as and (&&), bitand (&), or (||), bitor (|), xor (^)... They are called operator synonyms.
If you really think you need:
if (Flag == true)
then since the conditional expression is itself boolean you probably want to expand it to:
if ((Flag == true) == true)
and so on. How many more nails does this coffin need?
If you happen to be working in perl you have the option of
unless($isGood)
I do not use == but sometimes I use != because it's clearer in my mind. BUT at my job we do not use != or ==. We try to pick a significant name, with hasXYZ() or isABC().
Personally, I prefer the form that Uncle Bob talks about in Clean Code:
(...)
if (ShouldDoSomething())
{
DoSomething();
}
(...)
bool ShouldDoSomething()
{
return IsGood;
}
where conditionals, except the most trivial ones, are put in predicate functions. Then it matters less how readable the implementation of the boolean expression is.
We tend to do the following here:
if(IsGood)
or
if(IsGood == false)
The reason for this is that we've got some legacy code (in Delphi), written by a guy who is no longer here, that looks like:
if not IsNotGuam then
This has caused us much pain in the past, so it was decided that we would always try to check for the positive; if that wasn't possible, then compare the negative to false.
The only time I can think of where the more verbose code made sense was in pre-.NET Visual Basic, where true and false were actually integers (true=-1, false=0) and boolean expressions were considered false if they evaluated to zero and true for any other nonzero value. So, in the case of old VB, the two methods listed were not actually equivalent, and if you only wanted something to be true if it evaluated to -1, you had to explicitly compare to 'true'. So, an expression that evaluates to "+1" would be true if evaluated as an integer (because it is not zero), but it would not be equivalent to 'true'. I don't know why VB was designed that way, but I see a lot of boolean expressions comparing variables to true and false in old VB code.