I have a co-worker who maintains that TRUE used to be defined as 0 and all other values were FALSE. I could swear that in every language I've worked with, if you could even get a value for a boolean, the value for FALSE was 0. Did TRUE use to be 0? If so, when did we switch?
The 0 / non-0 thing your coworker is confused about is probably referring to when people use numeric values as return values indicating success, not truth (i.e. in bash scripts and some styles of C/C++).
Using 0 = success allows for a much greater precision in specifying causes of failure (e.g. 1 = missing file, 2 = missing limb, and so on).
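For instance, here's a minimal C-style sketch of that convention; the function name and error codes are made up for illustration:
#include <stdio.h>

/* Hypothetical error codes: 0 means success, each non-zero value names a failure. */
#define ERR_OK           0
#define ERR_MISSING_FILE 1
#define ERR_BAD_FORMAT   2

/* Hypothetical loader: returns 0 on success, a specific code on failure. */
static int load_config(const char *path)
{
    FILE *f = fopen(path, "r");
    if (f == NULL)
        return ERR_MISSING_FILE;
    /* ... parse the file; return ERR_BAD_FORMAT on a parse error ... */
    fclose(f);
    return ERR_OK;
}

int main(void)
{
    int rc = load_config("app.conf");
    if (rc != ERR_OK)   /* 0 means success; any non-zero value is some specific failure */
        printf("load_config failed with code %d\n", rc);
    return 0;
}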
As a side note: in Ruby, the only false values are nil and false. 0 is true, but not because of some special rule for numbers; 0 is true simply because it's an object, and every object other than nil and false is true.
It might be in reference to a result code of 0, which in most cases, after a process has run, meant, "Hey, everything worked fine, no problems here."
I worked at a company with a large amount of old C code. Some of the shared headers defined their own values for TRUE and FALSE, and some did indeed have TRUE as 0 and FALSE as 1. This led to "truth wars":
/* I like my constants better */
#undef TRUE
#define TRUE 1
#undef FALSE
#define FALSE 0
If nothing else, bash shells still use 0 for true, and 1 for false.
Several functions in the C standard library return an 'error code' integer as their result. Since noErr is defined as 0, a quick check can be 'if it's 0, it's OK'. The same convention carried over to a Unix process' 'result code'; that is, an integer that gives some indication of how a given process finished.
In Unix shell scripting, the result code of the command just executed is available, and is typically used to signify whether the command 'succeeded' or not, with 0 meaning success and anything else a specific non-success condition.
From that, all test-like constructs in shell scripts use 'success' (that is, a result code of 0) to mean TRUE, and anything else to mean FALSE.
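As a concrete (if contrived) C sketch of that "if it's 0, it's OK" check, using the standard remove() function, which returns 0 on success and non-zero on failure:
#include <stdio.h>

int main(void)
{
    /* remove() follows the error-code convention: 0 = success, non-zero = failure. */
    if (remove("no-such-file.tmp") == 0)
        printf("removed\n");
    else
        perror("remove");   /* non-zero result: report why it failed */
    return 0;
}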
On a totally different plane, digital circuits frequently use 'negative logic'. That is, even if 0 volts is called 'binary 0' and some positive value (commonly +5V or +3.3V, but nowadays it's not rare to use +1.8V) is called 'binary 1', some events are 'asserted' by a given pin going to 0. I think there are some noise-resistance advantages, but I'm not sure about the reasons.
Note, however, that there's nothing 'ancient' about this, nor was there any 'switching time'. Everything I've described is based on old conventions, but those conventions are totally current and relevant today.
I'm not certain, but I can tell you this: tricks relying on the underlying nature of TRUE and FALSE are prone to error because the definition of these values is left up to the implementer of the language (or, at the very least, the specifier).
System calls in the C standard library typically return -1 on error and 0 on success. Also, the Fortran arithmetic IF statement would (and probably still does) jump to one of three line numbers depending on whether the condition evaluates to less than, equal to, or greater than zero.
e.g.: IF (I-15) 10,20,10
would test for the condition I == 15, jumping to line 20 if true (the expression evaluates to zero) and line 10 otherwise.
Sam is right about the problems of relying on specific knowledge of implementation details.
General rule:
Shells (DOS included) use "0" as "No Error"... not necessarily true.
Programming languages use non-zero to denote true.
That said, if you're in a language which lets you define TRUE or FALSE, define them and always use the constants.
Even today, in some languages (Ruby, Lisp, ...) 0 is true because everything except nil is true. More often, 1 is true. That's a common gotcha, so it's sometimes considered good practice not to rely on 0 being false, but to do an explicit test. Java requires you to do this.
Instead of this
int x;
....
x = 0;
if (x) // might be ambiguous
{
}
Make it explicit
if (0 != x)
{
}
I recall doing some VB programming in an Access form where True was -1.
I remember PL/I had no boolean type. You could create a bit and assign it the result of a boolean expression. Then, to use it, you had to remember that 1 was false and 0 was true.
For the most part, false is defined as 0, and true is non-zero. Some programming languages use 1, some use -1, and some use any non-zero value.
Unix shells, though, use the opposite convention.
Most commands that run in a Unix shell are actually small programs. They pass back an exit code so that you can determine whether the command was successful (a value of 0), or whether it failed for some reason (1 or more, depending on the type of failure).
This is used in the sh/ksh/bash shell interpreters within the if/while/until commands to check conditions:
if command
then
# successful
fi
If the command is successful (i.e., returns a zero exit code), the code within the statement is executed. Usually, the command that is used is the [ command, which is an alias for the test command.
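To sketch the other side of that contract: the exit code the shell tests is whatever the small program returns from main (or passes to exit()). A hypothetical example in C, with the file name chosen just for illustration:
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical check: exit 0 (success, i.e. "true" to the shell) if a file
 * can be opened, and exit non-zero otherwise. */
int main(void)
{
    FILE *f = fopen("somefile.txt", "r");
    if (f == NULL)
        return EXIT_FAILURE;   /* non-zero: the shell's if/while treats this as false */
    fclose(f);
    return EXIT_SUCCESS;       /* 0: the shell's if/while treats this as true */
}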
The funny thing is that it depends on the language you are working with. In Lua, true == zero internally, for performance. The same goes for many syscalls in C.
It's easy to get confused when bash's true/false return statements are the other way around:
$ false; echo $?
1
$ true; echo $?
0
I have heard of and used older compilers where true > 0, and false <= 0.
That's one reason you don't want to use if(pointer) or if(number) to check for zero, they might evaluate to false unexpectedly.
Similarly, I've worked on systems where NULL wasn't zero.
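A defensive C sketch of those explicit comparisons (the names are just for illustration):
#include <stddef.h>
#include <stdio.h>

/* Hypothetical helper: explicit comparisons instead of if(ptr) / if(count),
 * so the intent is clear even on unusual implementations. */
static void process(const int *ptr, int count)
{
    if (ptr != NULL && count != 0)
        printf("processing %d items\n", count);
    else
        printf("nothing to do\n");
}

int main(void)
{
    int data[] = { 1, 2, 3 };
    process(data, 3);
    process(NULL, 0);
    return 0;
}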
In the C language, before C99 and C++, there was no built-in boolean type. Conditionals were done by testing ints. Zero meant false and any non-zero value meant true. So you could write
if (2) {
alwaysDoThis();
} else {
neverDoThis();
}
Fortunately, C++ introduced a dedicated boolean type.
In any language I've ever worked in (going back to BASIC in the late 70s), false has been considered 0 and true has been non-zero.
I can't recall TRUE being 0.
0 is something a C programmer would return to indicate success, though. This can be confused with TRUE.
It's not always 1 either. It can be -1 or just non-zero.
For languages without a built in boolean type, the only convention that I have seen is to define TRUE as 1 and FALSE as 0. For example, in C, the if statement will execute the if clause if the conditional expression evaluates to anything other than 0.
I even once saw a coding guidelines document which specifically said not to redefine TRUE and FALSE. :)
If you are using a language that has a built in boolean, like C++, then keywords true and false are part of the language, and you should not rely on how they are actually implemented.
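To go back to the convention in the first paragraph, here's a minimal C sketch (the defines and variable are my own illustration); note that if only cares about zero vs. non-zero, which is why comparing a value against TRUE directly can surprise you:
#include <stdio.h>

#define TRUE  1
#define FALSE 0

int main(void)
{
    int found = 5;        /* non-zero, so "true" by the convention, but not equal to TRUE */

    if (found)            /* executes: any value other than 0 is treated as true */
        printf("non-zero is taken as true\n");

    if (found == TRUE)    /* does NOT execute: 5 != 1 */
        printf("this line never prints\n");

    return 0;
}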
In languages like C there was no boolean type, so you had to define your own. Could they have been working with non-standard BOOL overrides?
DOS and exit codes from applications generally use 0 to mean success and non-zero to mean failure of some type!
DOS error codes are 0-255, and when tested using the 'errorlevel' syntax they mean anything greater than or equal to the specified value, so the following matches 2 and above with the first goto, 1 with the second, and 0 (success) with the final one!
IF errorlevel 2 goto CRS
IF errorlevel 1 goto DLR
IF errorlevel 0 goto STR
The SQL Server Database Engine optimizes the storage of bit columns. If there are 8 or fewer bit columns in a table, the columns are stored as 1 byte. If there are from 9 up to 16 bit columns, the columns are stored as 2 bytes, and so on.
The string values TRUE and FALSE can be converted to bit values: TRUE is converted to 1 and FALSE is converted to 0.
Converting to bit promotes any nonzero value to 1.
Every language can end up with 0 as true or as false, so stop relying on numbers and use the words true and false. Or use 't' and 'f' if you want 1-byte storage.
I have got this to work, but I am trying to intuitively understand how the if statement (see below, has_top_and_bottom) works in this instance, i.e. how the boolean logic works:
def cylinder_surface_area(radius, height, has_top_and_bottom):
    side_area = height * 6.28 * radius
    if has_top_and_bottom:
        top_area = 2 * 3.14 * radius ** 2
        return side_area + top_area
    else:
        return side_area

print(cylinder_surface_area(10, 5, False))
I don't use Python a whole lot, but when I do I really like using the Python IDLE editor. It gives you a pretty bare-bones editor for your code, that you can execute very easily. In terms of editor, it is entirely a matter of preference. As long as the editor that you are using suits all of your needs and you can use it, it is the best editor for you.
Now, in reference to your question: I can definitely understand where your confusion comes from. The way that Python treats variables (and consequently does not require strict typing of variables) was confusing for me at first as well. From the Python documentation:
Any object can be tested for truth value, for use in an if or while condition or as operand of the Boolean operations below. The following values are considered false:
None
False
zero of any numeric type, for example, 0, 0L, 0.0, 0j
any empty sequence, for example, '', (), []
any empty mapping, for example, {}
instances of user-defined classes, if the class defines a __nonzero__() or __len__() method, when that method returns the integer zero or bool value False
All other values are considered true -- so objects of many types are always true. Operations and built-in functions that have a Boolean result always return 0 or False for false and 1 or True for true, unless otherwise stated. (Important exception: the Boolean operations "or" and "and" always return one of their operands.)
Thus, your function expects you to enter a Boolean input; however, even if you enter a string, an integer, a float, or anything else, it will still be able to evaluate it with well-defined behavior. I am assuming that your confusion comes from the fact that you can evaluate any variable as True or False, and not from how Boolean logic works at its core. If you need me to explain Boolean logic as well, feel free to comment and I'll add that too.
I've seen this in a few places, people using
if(null == myInstance)
instead of
if(myInstance == null)
Is the former just a matter of style, or does it have any performance impact compared to the (more intuitive, in my opinion) latter?
It actually depends on the language you are using.
In the languages where only bool type is recognized as a condition inside if, if (myinstance == null) is preferred as it reads more naturally.
In the languages where other types can be treated (implicitly casted, etc.) as a condition the form if (null == myinstance) is usually preferred. Consider, for example, C. In C, ints and pointers are valid inside if. Additionally, assignment is treated as an expression. So if one mistakenly writes if (myinstance = NULL), this won't be detected by the compiler as an error. This is known to be a popular source of bugs, so the less readable but more reliable variant is recommended. Actually, some modern compilers issue a warning in case of assignment inside if (while, etc.), but not an error: after all, it's a valid expression in C.
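A minimal C sketch of that bug class (the variable names are arbitrary):
#include <stddef.h>
#include <stdio.h>

int main(void)
{
    int value = 42;
    int *myinstance = &value;

    /* Typo: assignment instead of comparison. This compiles in C (often with
     * only a warning), sets myinstance to NULL, and the condition is then false. */
    if (myinstance = NULL)
        printf("never reached\n");

    /* With the constant on the left, the same typo would be a compile error:
     *     if (NULL = myinstance) ...
     * so the mistake cannot slip through. */
    if (NULL == myinstance)
        printf("myinstance is now NULL because of the typo above\n");

    return 0;
}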
You do that to avoid accidentally setting the value of myInstance to null.
Example for a typo:
if (null = myInstance) // no problem here, it just won't compile
if (myInstance = null) // not good. Will compile anyway.
In general, a lot of developers will put the constant part of the comparison in first position for that reason.
Writing conditional checks this way is also referred to as Yoda Conditions.
Why is this not possible?
int c = 0;
++c++;
Or in PHP:
$c = 0;
++$c++;
I would expect it to increment the variable c by 2, or perhaps do something weird, but instead it gives an error while compiling. I've tried to come up with a reason but got nothing really... My reasoning was this:
The compiler reads ++
It reads the variable
It does whatever it does so that the value of the variable is incremented and then returned when the application executes
It encounters another ++
It uses the previous variable in order to return it before incrementing the value
This is where it gets confusing: does it use the variable c, or does it try to read the value that (++c) returned? Or, since you can only do varname++ (and not 2++), does it see (++c) as a pointer and then tries to read that memory location? Whatever it does, why would it give a compile error? Or is the error preventive, because the compiler's programmer knew it wouldn't do anything useful?
It's not really that I would want to use this, and certainly not for code that is not one-time use only, but I'm just curious.
For the same reason you can't do:
c++ = 5;
It returns a value, which cannot be modified nor assigned to. That's not a runtime error, either - that's a compilation error. (Like this one.)
Returning a reference wouldn't make sense either, because then:
$a = 1;
$b = $a++; // How can it be a reference if b should be 1 and a should be 2?
This isn't necessarily language-agnostic, although I'd be a little surprised to see a language in which it was different.
In C (and hence I assume in all non-annoying languages based on C), the operator precedence means your expression is equivalent to ++(c++). Based on your step-by-step, you were expecting it to be equivalent to (++c)++, but it isn't. Postfix ++ "binds more tightly" than prefix ++.
Like everyone says, the expression c++ results in a value, not a modifiable object (an expression in C that refers to an object is called an lvalue). This is necessary because the object c no longer holds the value that the expression evaluates to -- there is no object that c++ could refer to.
In C++ (not to be confused with c++!), ++c is an lvalue. It refers to the object c, which now has the new value.
So in C++ (++c)++ is syntactically correct. It happens to have undefined behavior if c is of type int (at least, it did in C++03: C++11 made some things defined that used to be undefined and I'm not up to date on those changes).
So, if you imagine (or go ahead and invent) a C++-like language in which the operator precedence is what you expected, then you could arrange for ++c++ to be valid. Presumably it would increment c twice, and evaluate to the value in between the old and new values. This imagined language would be "annoying" in the sense that it's only subtly different from C++, which would tend to be confusing.
++c++ is also valid in C++ if c is an instance of a user-defined type that overloads both increment operators. The reason is that an rvalue of user-defined type is an object (a "temporary object"), and can be modified. But as soon as the expression has been evaluated, the temporary is destroyed and the modification is lost.
Another way to explain this is that the ++ operator only operates on an lvalue. But when you combine them, it's parsed as either ++(c++) or (++c)++; in either case, the operand of the operator outside the parentheses is an rvalue (c's value before or after the increment), not an lvalue.
I am just trying to use the OR function in Excel for analyzing some string variables. Instead of using a character count to verify whether someone answered correctly, I would like to use the OR function to accept more than one answer as correct.
For instance, return 1 if cell = "she was always there" or "she was there always".
How would I do this?
=IF(OR(A1="this",A1="that"),1,0)
IF takes three values:
The logical test
The value if true
The value if false
Each value can take other functions as its argument. In this case, we use an OR function as the logical test. The OR can take any number of arguments (I'm sure there is a limit, but I've never seen it). Each OR argument takes the form of a logical test, and if any of the logical tests are TRUE, then the result of the OR is TRUE.
As a side note, you can nest many functions to create complex tests. But the nesting limit seems to be set at 7. Though there are tricks you can use to get around this, it makes reading the function later very difficult.
If you can live with "TRUE" or "FALSE" returns, then you don't need the IF function. Just:
=OR(A1="she was always there",A1="she was there always")
I found that by Googling "EXCEL OR FUNCTION"
http://office.microsoft.com/en-us/excel-help/or-function-HP010062403.aspx
I've noticed something about my coding that is slightly undefined. Say we have a two-dimensional array, a matrix, or a table, and we are looking through it to check whether a property is true for every row or nested dimension.
Say I have a boolean flag that is to be used to check if a property is true or false. My options are to:
1. Initialize it to true and check each cell until proven false. This gives it a wrong name until the code is completely executed.
2. Start on false and check each row until proven true. Only if all rows are true will the data be correct.
What is the cleanest way to do this, without a counter?
I've always done 1 without thinking but today it got me wondering. What about 2?
Depends on which one dumps you out of the loop first, IMHO.
For example, with an OR situation, I'd default to false, and as soon as you get a TRUE, return the result, otherwise return the default as the loop falls through.
For an AND situation, I'd do the opposite.
They actually both amount to the same thing and since you say "check if a property is true for every row or nested dimension", I believe the first method is easier to read and perhaps slightly faster.
You shouldn't try to read the value of the flag until the code is completely executed anyway, because the check isn't finished. If you're running asynchronous code, you should guard against accessing the value while it is unstable.
Both methods "give a wrong name" until the code is executed: 1 gives false positives and 2 gives false negatives. I'm not sure what you're trying to avoid by saying this; if you could get the "correct" value before fully running your code, you wouldn't have had to run your code in the first place.
How to implement each without a counter (if you don't have a foreach syntax in your language, use the appropriate enumerator->next loop syntax):
1:
bool flag = true;
foreach(item in array)
{
if(!check(item))
{
flag = false;
break;
}
}
2:
bool flag = false;
foreach(item in array)
{
if(!check(item))
{
break;
}
else if(item.IsLast())
{
flag = true;
}
}
Go with the first option. An algorithm always has preconditions, postconditions and invariants. If your invariant is "bool x is true iff all rows from 0 to currentN have a positive property", then everything is fine.
Don't make your algorithm more complex just to keep the full program state valid on every row iteration. Refactor the method, extract it, and make it "atomic" with your language's mechanics (Java: synchronized).
Personally, I just throw the whole loop into a somewhat reusable method/function called isPropertyAlwaysTrue(property, array[][]) and have it return false directly if it finds a case where the property is not true.
The inversion of logic, by the way, does not get you out of there any quicker. For instance, if you want the first non-true case, saying areAnyFalse or areAllTrue will have an inverted output, but will have to test the exact same cases.
This is also the case with areAnyTrue and areAllFalse--different words for the exact same algorithm (return as soon as you find a true).
You cannot compare areAnyFalse with areAnyTrue because they are testing for a completely different situation.
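A C sketch of such a reusable check; the name and signature are my own, and it bails out on the first failing case, just as described above:
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical reusable check: returns false as soon as one element fails
 * the predicate, and true only if every element passes. */
static bool is_property_always_true(const int *values, size_t count,
                                    bool (*property)(int))
{
    for (size_t i = 0; i < count; i++)
    {
        if (!property(values[i]))
            return false;   /* first non-true case: no need to look further */
    }
    return true;
}

static bool is_positive(int x) { return x > 0; }

int main(void)
{
    int rows[] = { 3, 7, 2 };
    if (is_property_always_true(rows, 3, is_positive))
        printf("property holds for every row\n");
    else
        printf("property fails for at least one row\n");
    return 0;
}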
Make the property name something like isThisTrue. Then it's answered "yes" or "no" but it's always meaningful.
In Ruby and Scheme you can use a question mark in the name: isThisTrue?. In a lot of other languages, there is a convention of putting "p" for "predicate" in the name -- null-p for a test returning true or false, in Lisp.
I agree with Will Hartung.
If you are worried about (1) then just choose a better name for your boolean. IsNotSomething instead of IsSomething.