How does one incorporate Boolean logic in defined functions?

I have got this to work, but I am trying to intuitively understand how the if statement (see has_top_and_bottom below) works in this instance, i.e. how the Boolean logic works:
def cylinder_surface_area(radius, height, has_top_and_bottom):
    side_area = height * 6.28 * radius
    if has_top_and_bottom:
        top_area = 2 * 3.14 * radius ** 2
        return side_area + top_area
    else:
        return side_area

print(cylinder_surface_area(10, 5, False))

I don't use Python a whole lot, but when I do I really like using the Python IDLE editor. It gives you a pretty bare-bones editor for your code that you can execute very easily. In terms of editors, it is entirely a matter of preference: as long as the editor you are using suits all of your needs and you can use it, it is the best editor for you.
Now, in reference to your question: I can definitely understand where your confusion comes from. The way that Python treats variables (and consequently does not require strict typing of variables) was confusing for me at first as well. From the Python documentation:
Any object can be tested for truth value, for use in an if or while condition or as operand of the Boolean operations below. The following values are considered false:

- None
- False
- zero of any numeric type, for example, 0, 0L, 0.0, 0j.
- any empty sequence, for example, '', (), [].
- any empty mapping, for example, {}.
- instances of user-defined classes, if the class defines a __nonzero__() or __len__() method, when that method returns the integer zero or bool value False.

All other values are considered true -- so objects of many types are always true. Operations and built-in functions that have a Boolean result always return 0 or False for false and 1 or True for true, unless otherwise stated. (Important exception: the Boolean operations "or" and "and" always return one of their operands.)
Thus, your function expects you to enter a Boolean input; however, even if you enter a String, an Integer, a Double, or anything else, it will still be able to evaluate it with well-defined behavior. I am assuming that your confusion comes from the fact that you can evaluate any variable as True or False, and not from how Boolean logic works at its core. If you need me to explain Boolean logic as well, feel free to comment and I'll add that too.
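For example, here is a minimal sketch (in Python) of that truth testing in action, reusing your function:

def cylinder_surface_area(radius, height, has_top_and_bottom):
    side_area = height * 6.28 * radius
    if has_top_and_bottom:
        return side_area + 2 * 3.14 * radius ** 2
    return side_area

# bool() applies the same truth test that an if condition does:
print(bool(0), bool(""), bool([]))      # False False False
print(bool(1), bool("x"), bool([0]))    # True True True

print(cylinder_surface_area(10, 5, 0))     # 0 is falsy: side area only
print(cylinder_surface_area(10, 5, "hi"))  # a non-empty string is truthy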


Avoid even Option fields. Always empty string for String and 0 for Int optional fields

I have a Scala REST service based on JSON and the Play Framework. Some of the fields of the JSON are optional (e.g. middleName). I can mark such a field as an Option, e.g.
middleName: Option[String]
and not even expect it in the JSON. But I would like to avoid possible app errors in the future and simplify life, so I would like to mark the field as expected but empty if the user doesn't want to provide this info, and have no Option fields throughout the entire application (the JSON/DB overhead is minor).
Is it a good idea to avoid Option fields throughout the application? If a String field is empty, it contains an empty string but is still mandatorily present in the JSON/DB. If an Int field is empty, it contains 0, etc.
Thanks in advance
I think you would regret avoiding Option because of the loss of type safety. If you go passing around potentially null object references, everyone who touches them has to remember to check for null because there is nothing that forces them to do so. Failure to remember is a NullPointerException waiting to happen. The use of Option forces code to deal with the possibility that there is no value to work with; forgetting to do so will cause a compilation error:
case class Foo(name: Option[String])
...
if (foo1.name startsWith "/") // ERROR: no startsWith on Option
I very occasionally do use nulls in a very localized bit of code where I think either performance is critical or I have many, many objects and don't want to have all of those Some and None objects taking up memory, but I would never leak the null out across a public API. Using nulls is a complicating optimization that should only be used where the extra vigilance required to avoid catastrophe is justified by the benefit. Such cases are rare.
I am not entirely sure I understand what your needs are with regard to JSON, but it sounds like you might like to have Option fields not disappear from JSON documents. In Spray-json there is a NullOptions trait specifically for this. You simply mix it into your protocol type and it affects all of the JsonFormats defined within (you can have other protocol types that do not mix it in if you like), e.g.
trait FooJsonProtocol extends DefaultJsonProtocol with NullOptions {
  // your jsonFormats
}
Without NullOptions, Option members with value None are omitted altogether; with it, they appear with null values. I think that it is clearer for users if you show the optional fields with null values rather than having them disappear, but for transmission efficiency you might want them omitted. With Spray-json, at least, you can pick.
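To make the omitted-versus-null difference concrete, here is the same idea sketched with Python's json module (purely an illustration; this is not Spray-json):

import json

user = {"firstName": "Ada", "middleName": None}

# Default-style behaviour: drop absent (None) fields from the document.
print(json.dumps({k: v for k, v in user.items() if v is not None}))
# {"firstName": "Ada"}

# NullOptions-style behaviour: keep the field as an explicit null.
print(json.dumps(user))
# {"firstName": "Ada", "middleName": null}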
I don't know whether other JSON packages have a similar option, but perhaps that will help you look for it if for some reason you don't want to use Spray-json (which, by the way, is very fast now).
I think that would depend on your business logic and how you want to use these values.
In the case of the middleName I am assuming you are using it primarily to address the user in a personal manner and you just concatenate title, firstName, middleName and lastName. So you treat the value exactly the same whether the user has specified it or not. So I think using an empty String instead of None might be preferable.
In the case of values where 0 or "" is a valid value in terms of your business logic, I would go with Option[String], and likewise in cases where you have different behaviours depending on whether the value is specified or not.
x match {
  case 0 => foo
  case i => bar(i)
}
is less descriptive than
x match {
  case Some(i) => bar(i)
  case None => foo
}
It's a bad idea, because normally you want to handle the absence of something differently. If you pass a value of "" or 0 around, this can very easily be confused with a real value; you might end up sending an email that starts "Dear Mr ," or wishing them Happy 35th Birthday because the timestamp 0 comes out as 1st January 1970. If you keep a distinction between a value and None in code and in the type system, this forces you to think about whether a value is actually set and what you want to do if it isn't.
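A quick Python demonstration of that timestamp pitfall (an added illustration, not part of the original point):

from datetime import datetime, timezone

# A "default" timestamp of 0 silently turns into a real-looking date:
print(datetime.fromtimestamp(0, tz=timezone.utc))
# 1970-01-01 00:00:00+00:00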
Don't blindly just push Options everywhere though, either. If it's an error for a value to not be supplied, you should check that immediately and throw an error as soon as possible, not wait until much later in your application when it will be harder to debug where that None came from.
It won't make your "life easier". If anything, it will make it harder, and instead of avoiding app errors will make them more likely. Your app code will have to be infested with checks like if(middleName != "") { doSomething(middleName); } or if(age == 0) "Unknown age" else age.toString, and you will have to rely on the programmer remembering to handle those "kinda-optional" fields in a special way.
All of this you could get "for free" using the monadic properties of Option, with middleName.foreach(doSomething) or age.map(_.toString).getOrElse("").
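The same contrast, sketched in Python with a hypothetical middle-name field (Optional/None standing in for Scala's Option):

from typing import Optional

# "Kinda-optional" sentinel: every caller must remember the special case.
def greet_sentinel(middle_name: str) -> str:
    if middle_name != "":          # forget this check and "Dear " ships
        return "Dear " + middle_name
    return "Dear customer"

# Explicit optional: absence is part of the type and cannot be overlooked.
def greet_optional(middle_name: Optional[str]) -> str:
    return "Dear " + middle_name if middle_name is not None else "Dear customer"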

Using the OR function in Excel

I am just trying to use the OR function in Excel for analyzing some string variables. Instead of using a character count to verify whether someone answered correctly, I would like to use the OR function to accept more than one answer as correct.
For instance, return 1 if cell = "she was always there" or "she was there always".
How would I do this?
=IF(OR(A1="this",A1="that"),1,0)
IF takes three values:
The logical test
The value if true
The value if false
Each value can take other functions as its argument. In this case, we use an OR function as the logical test. The OR can take any number of arguments (I'm sure there is a limit, but I've never seen it). Each OR argument takes the form of a logical test, and if any of the logical tests are TRUE, then the result of the OR is TRUE.
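For comparison, here is the same accept-several-answers check sketched outside Excel, in Python (the function name is illustrative):

def grade(cell):
    # Equivalent to =IF(OR(A1="she was always there",A1="she was there always"),1,0)
    accepted = ("she was always there", "she was there always")
    return 1 if any(cell == answer for answer in accepted) else 0

print(grade("she was there always"))  # 1
print(grade("she was never there"))   # 0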
As a side note, you can nest many functions to create complex tests. But the nesting limit seems to be set at 7. Though there are tricks you can use to get around this, it makes reading the function later very difficult.
If you can live with "TRUE" or "FALSE" returns, then you don't need the IF function. Just:
=OR(A1="she was always there",A1="she was there always")
I found that by Googling "EXCEL OR FUNCTION"
http://office.microsoft.com/en-us/excel-help/or-function-HP010062403.aspx

True until disproven or false until proven?

I've noticed something about my coding that is slightly undefined. Say we have a two-dimensional array, a matrix or a table, and we are looking through it to check whether a property is true for every row or nested dimension.
Say I have a boolean flag that is to be used to check if a property is true or false. My options are to:
1. Initialize it to true and check each cell until proven false. This gives it a wrong name until the code is completely executed.
2. Start on false and check each row until proven true. Only if all rows are true will the data be correct. What is the cleanest way to do this, without a counter?
I've always done 1 without thinking but today it got me wondering. What about 2?
Depends on which one dumps you out of the loop first, IMHO.
For example, with an OR situation, I'd default to false, and as soon as you get a TRUE, return the result, otherwise return the default as the loop falls through.
For an AND situation, I'd do the opposite.
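Sketched in Python (an illustration of that rule of thumb, with check standing in for whatever test you run per item):

def any_true(check, items):
    # OR situation: default to False; the first TRUE dumps us out of the loop.
    for item in items:
        if check(item):
            return True
    return False

def all_true(check, items):
    # AND situation: default to True; the first FALSE dumps us out of the loop.
    for item in items:
        if not check(item):
            return False
    return True

print(all_true(lambda n: n > 0, [1, 2, 3]))  # True
print(any_true(lambda n: n < 0, [1, 2, 3]))  # False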
They actually both amount to the same thing and since you say "check if a property is true for every row or nested dimension", I believe the first method is easier to read and perhaps slightly faster.
You shouldn't try to read the value of the flag until the code is completely executed anyway, because the check isn't finished. If you're running asynchronous code, you should guard against accessing the value while it is unstable.
Both methods "give a wrong name" until the code is executed. Option 1 gives false positives and option 2 gives false negatives. I'm not sure what you're trying to avoid by saying this - if you could get the "correct" value before fully running your code, you didn't have to run your code in the first place.
How to implement each without a counter (if you don't have a foreach syntax in your language, use the appropriate enumerator->next loop syntax):
1:
bool flag = true;
foreach(item in array)
{
    if(!check(item))
    {
        flag = false;
        break;
    }
}
2:
bool flag = false;
foreach(item in array)
{
    if(!check(item))
    {
        break;
    }
    else if(item.IsLast())
    {
        flag = true;
    }
}
Go with the first option. An algorithm always has preconditions, postconditions and invariants. If your invariant is "bool x is true iff all rows from 0 to currentN have a positive property", then everything is fine.
Don't make your algorithm more complex just to make the full program state valid per row-iteration. Refactor the method, extract it, and make it "atomic" with your language's mechanics (Java: synchronized).
Personally I just throw the whole loop into a somewhat reusable method/function called isPropertyAlwaysTrue(property, array[][]) and have it return false directly if it finds a case where the property is not true.
The inversion of logic, by the way, does not get you out of there any quicker. For instance, if you want the first non-true case, saying areAnyFalse or areAllTrue will have an inverted output, but will have to test the exact same cases.
This is also the case with areAnyTrue and areAllFalse--different words for the exact same algorithm (return as soon as you find a true).
You cannot compare areAnyFalse with areAnyTrue because they are testing for a completely different situation.
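A Python sketch of that reusable method and its inverted sibling (the names come from the answer; the bodies are illustrative):

def is_property_always_true(check, table):
    for row in table:
        for cell in row:
            if not check(cell):
                return False   # return directly: no flag, no counter
    return True

# areAnyFalse tests the exact same cases; only the wording is inverted.
def are_any_false(check, table):
    return not is_property_always_true(check, table)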
Make the property name something like isThisTrue. Then it's answered "yes" or "no" but it's always meaningful.
In Ruby and Scheme you can use a question mark in the name: isThisTrue?. In a lot of other languages, there is a convention of putting "p" for "predicate" on the name -- null-p for a test returning true or false, in LISP.
I agree with Will Hartung.
If you are worried about (1) then just choose a better name for your boolean. IsNotSomething instead of IsSomething.

Best Practice: function return value or byref output parameters?

I have a function called FindSpecificRowValue that takes in a datatable and returns the row number that contains a particular value. If that value isn't found, I want to indicate so to the calling function.
Is the best approach to:
1. Write a function that returns false if not found, true if found, and the found row number as a byref/output parameter, or
2. Write a function that returns an int and passes back -999 if the row value isn't found, and the row number if it is?
Personally I would not do either with that method name.
I would instead make two methods:
TryFindSpecificRow
FindSpecificRow
This would follow the pattern of Int32.Parse/TryParse, and in C# they could look like this:
public static Boolean TryFindSpecificRow(DataTable table, out Int32 rowNumber)
{
    if (row-can-be-found)
    {
        rowNumber = index-of-row-that-was-found;
        return true;
    }
    else
    {
        rowNumber = 0; // this value will not be used anyway
        return false;
    }
}

public static Int32 FindSpecificRow(DataTable table)
{
    Int32 rowNumber;
    if (TryFindSpecificRow(table, out rowNumber))
        return rowNumber;
    else
        throw new RowNotFoundException("The specific row was not found in the table");
}
Edit: Changed to be more appropriate to the question.
Functions that fail should throw exceptions.
If failure is part of the expected flow then returning an out of band value is OK, except where you cannot pre-determine what an out-of-band value would be, in which case you have to throw an exception.
If I had to choose between your options I would choose option 2, but use a constant rather than -999...
You could also define the return value as Nullable and return Nothing if nothing is found.
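In Python terms, that nullable-return variant might look like this (find_specific_row and the sample table are illustrative, not the asker's code):

from typing import Optional

def find_specific_row(table, value) -> Optional[int]:
    for row_number, row in enumerate(table):
        if value in row:
            return row_number
    return None   # an unambiguous "not found": no -1, no -999

row = find_specific_row([["a"], ["b", "target"]], "target")
if row is not None:
    print("found in row", row)   # found in row 1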
I would choose option 2. Although I think I would just use -1 not -999.
Richard Harrison is right that a named constant is better than a bare -1 or -999.
I would go with 2, or some other variation where the return value indicates whether the value was found.
It seems that the value of the row the function returns (or provides a reference to) already indicates whether the value was found. If a value was not found, then it seems to make no sense to provide a row number that doesn't contain the value, so the return value should be -1, or Null, or whatever other value is suitable for the particular language. Otherwise, the fact that a row number was returned indicates the value was found.
Thus, there doesn't seem to be a need for a separate return value to indicate whether the value was found. However, type 1 might be appropriate if it fits with the idioms of the particular language, and the way function calls are performed in it.
Go with 2), but return -1 (or a null reference if returning a reference to the row); that idiom is used extensively (including by .NET's IndexOf(item) functions), so it's what I'd probably do.
BTW, -1 is a more acceptable and widely used "magic number" than -999; that's the only reason why it's "correct" (quotes used there for a reason).
However much of this has to do with what you expect. Should the item always be in there, but you just don't know where? In that case return the index normally, and throw an error/exception if it's not there.
In this case, the item might not be there, and that's an okay condition. It's an error trap for unselected values in a GridView that binds to a datatable.
Another few possibilities not yet mentioned:
// Method 1: Supports covariance; can return default<T> on failure.
T TryGetThing(ref bool success);

// Method 2: Does not support covariance, but may allow cleaner code in some
// cases where calling code would use some particular value in case of failure.
T TryGetThing(T DefaultValue);

// Method 3: Does not support covariance, but may allow cleaner code in some
// cases where calling code would use some particular value in case of failure,
// but should not take the time to compute that value except when necessary.
T TryGetThing(Func<T> AlternateGetMethod);

// Method 4: Does support covariance; ErrorMethod can throw if that's what
// should happen, or it can set some flag which is visible to the caller in
// some other way.
T TryGetThing(Action ErrorMethod);
The first approach is the reverse of the method Microsoft developed in the days before support existed for covariant interfaces. The last is in some ways the most versatile, but is likely to require the creation of a couple of new GC object instances (e.g. a closure and a delegate) each time it's used.

Has TRUE always had a non-zero value?

I have a co-worker who maintains that TRUE used to be defined as 0 and all other values were FALSE. I could swear that in every language I've worked with, if you could even get a value for a boolean, the value for FALSE is 0. Did TRUE use to be 0? If so, when did we switch?
The 0 / non-0 thing your coworker is confused about is probably referring to when people use numeric values as a return value indicating success, not truth (e.g. in bash scripts and some styles of C/C++).
Using 0 = success allows for a much greater precision in specifying causes of failure (e.g. 1 = missing file, 2 = missing limb, and so on).
As a side note: in Ruby, the only false values are nil and false. 0 is true, but not as opposed to other numbers. 0 is true because it's an instance of the object 0.
It might be in reference to a result code of 0, which in most cases after a process has run meant, "Hey, everything worked fine, no problems here."
I worked at a company with a large amount of old C code. Some of the shared headers defined their own values for TRUE and FALSE, and some did indeed have TRUE as 0 and FALSE as 1. This led to "truth wars":
/* like my constants better */
#undef TRUE
#define TRUE 1
#undef FALSE
#define FALSE 0
If nothing else, bash shells still use 0 for true, and 1 for false.
Several functions in the C standard library return an 'error code' integer as the result. Since noErr is defined as 0, a quick check can be 'if it's 0, it's OK'. The same convention carried over to a Unix process's 'result code'; that is, an integer that gives some indication of how a given process finished.
In Unix shell scripting, the result code of the command just executed is available, and is typically used to signify whether the command 'succeeded' or not, with 0 meaning success and anything else a specific non-success condition.
From that, all test-like constructs in shell scripts use 'success' (that is, a result code of 0) to mean TRUE, and anything else to mean FALSE.
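You can watch that convention from Python, assuming a Unix system with the true and false commands available:

import subprocess

# true exits 0 ("success", i.e. shell TRUE); false exits 1 (shell FALSE).
print(subprocess.run(["true"]).returncode)   # 0
print(subprocess.run(["false"]).returncode)  # 1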
On a totally different plane, digital circuits frequently use 'negative logic'; that is, even though 0 volts is called 'binary 0' and some positive value (commonly +5V or +3.3V, but nowadays it's not rare to use +1.8V) is called 'binary 1', some events are 'asserted' by a given pin going to 0. I think there are some noise-resistance advantages, but I'm not sure about the reasons.
Note, however, that there's nothing 'ancient' about this, nor was there some 'switching time'. Everything I know about this is based on old conventions, but those conventions are totally current and relevant today.
I'm not certain, but I can tell you this: tricks relying on the underlying nature of TRUE and FALSE are prone to error because the definition of these values is left up to the implementer of the language (or, at the very least, the specifier).
System calls in the C standard library typically return -1 on error and 0 on success. Also, the Fortran arithmetic IF statement would (and probably still does) jump to one of three line numbers depending on the condition evaluating to less than, equal to, or greater than zero.
eg: IF (I-15) 10,20,10
would test for the condition of I == 15 jumping to line 20 if true (evaluates to zero) and line 10 otherwise.
Sam is right about the problems of relying on specific knowledge of implementation details.
General rule:
- Shells (DOS included) use "0" as "No Error"... not necessarily true.
- Programming languages use non-zero to denote true.
That said, if you're in a language which lets you define TRUE or FALSE, define it and always use the constants.
Even today, in some languages (Ruby, Lisp, ...) 0 is true because everything except nil is true. More often 1 is true. That's a common gotcha, and so it's sometimes considered good practice not to rely on 0 being false but to do an explicit test. Java requires you to do this.
Instead of this
int x;
....
x = 0;
if (x) // might be ambiguous
{
}
Make it explicit:
if (0 != x)
{
}
I recall doing some VB programming in an Access form where True was -1.
I remember PL/1 had no boolean class. You could create a bit and assign it the result of a boolean expression. Then, to use it, you had to remember that 1 was false and 0 was true.
For the most part, false is defined as 0, and true is non-zero. Some programming languages use 1, some use -1, and some use any non-zero value.
For Unix shells though, they use the opposite convention.
Most commands that run in a Unix shell are actually small programs. They pass back an exit code so that you can determine whether the command was successful (a value of 0), or whether it failed for some reason (1 or more, depending on the type of failure).
This is used in the sh/ksh/bash shell interpreters within the if/while/until commands to check conditions:
if command
then
# successful
fi
If the command is successful (ie, returns a zero exit code), the code within the statement is executed. Usually, the command that is used is the [ command, which is an alias for the test command.
The funny thing is that it depends on the language you are working with. In Lua, 0 is true (only nil and false are false), while many syscalls in C return 0 on success.
It's easy to get confused when bash's true/false return statements are the other way around:
$ false; echo $?
1
$ true; echo $?
0
I have heard of and used older compilers where true > 0, and false <= 0.
That's one reason you don't want to use if(pointer) or if(number) to check for zero; they might evaluate to false unexpectedly.
Similarly, I've worked on systems where NULL wasn't zero.
In the C language, before C++, there was no such thing as a boolean. Conditionals were done by testing ints. Zero meant false and any non-zero meant true. So you could write
if (2) {
    alwaysDoThis();
} else {
    neverDoThis();
}
Fortunately C++ allowed a dedicated boolean type.
In any language I've ever worked in (going back to BASIC in the late 70s), false has been considered 0 and true has been non-zero.
I can't recall TRUE being 0.
0 is something a C programmer would return to indicate success, though. This can be confused with TRUE.
It's not always 1 either. It can be -1 or just non-zero.
For languages without a built in boolean type, the only convention that I have seen is to define TRUE as 1 and FALSE as 0. For example, in C, the if statement will execute the if clause if the conditional expression evaluates to anything other than 0.
I even once saw a coding guidelines document which specifically said not to redefine TRUE and FALSE. :)
If you are using a language that has a built in boolean, like C++, then keywords true and false are part of the language, and you should not rely on how they are actually implemented.
In languages like C there was no boolean value, so you had to define your own. Could they have worked with non-standard BOOL overrides?
DOS and exit codes from applications generally use 0 to mean success and non-zero to mean failure of some type!
DOS error codes are 0-255 and, when tested using the 'errorlevel' syntax, mean anything above or including the specified value, so the following matches 2 and above with the first goto, 1 with the second, and 0 (success) with the final one!
IF errorlevel 2 goto CRS
IF errorlevel 1 goto DLR
IF errorlevel 0 goto STR
The SQL Server Database Engine optimizes storage of bit columns. If there are 8 or fewer bit columns in a table, the columns are stored as 1 byte. If there are from 9 up to 16 bit columns, the columns are stored as 2 bytes, and so on.
The string values TRUE and FALSE can be converted to bit values: TRUE is converted to 1 and FALSE is converted to 0.
Converting to bit promotes any nonzero value to 1.
Any given language can treat 0 as true or as false, so stop relying on the number and use the words true and false (or t and f, which take 1 byte of storage).