Python truth tables [duplicate] - boolean-logic

I am new to coding and taking a basic Python course online. The current section I'm on talks about truth tables with Boolean operators. I am following most of it, but there is one part that just doesn't make sense to me no matter how many times I go through it.
The idea is for each of the numbers 1 - 12 we need to say which results would ultimately be true and which would be false. I am fine with 1-5, 7-8, 10-12, but I keep getting 6 and 9 wrong.
I would evaluate the values inside the parentheses first (for #6 A is True, B is False) and then reverse them with the not so it's (False or True). Since one of them is true I figured the whole thing would be True since it's an "or", but the answer says it's false. The same is the case with #9, just the final result is (True or False), but by following the same logic I would think the overall answer should be true.
I attempted to move on in the course and come back to this later, but the next thing they discussed was the ability to apply the distributive property to these expressions. Using that I went back to the example and came up with the same, apparently incorrect, answers. Viewing the whole statement as "not A or not B" I come to the same conclusion: #6 says "False or True" and #9 says "True or False", both of which seem to me to be True since we're using 'or', not 'and'.
Please explain what I am missing here; I just can't wrap my head around this. Thanks in advance!

If you evaluate the expression inside the parentheses first, you get a single boolean value; the not operation is then applied to that value.
Eg:

1.
a = True
b = False
print(a or b)

Output:
True

2.
print(not (a or b))  # which is equal to print(not(True))

Output:
False
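To tie this back to the question: De Morgan's laws say that not (A or B) equals (not A) and (not B), not (not A) or (not B). Here is a small sketch (A and B are just stand-ins for the operands in the course's #6 and #9) that prints the whole truth table so you can compare the columns:

# De Morgan: not (A or B) == (not A) and (not B)
#            not (A and B) == (not A) or (not B)
for A in (True, False):
    for B in (True, False):
        print(A, B, not (A or B), (not A) and (not B), (not A) or (not B))

For A = True, B = False (the #6 case) the third and fourth columns both print False, while (not A) or (not B) prints True, which is exactly the wrong turn described in the question.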


Angular/HTML - How do you restrict RegEx results to values 1-10? [duplicate]

I am working on an exercise where we are testing form validation in an Angular template. The restrictions for the one input field are that the selection should be a number from 1-10.
My first try was below, as I was taught using pipes could separate literals.
pattern="[1|2|3|4|5|6|7|8|9|10]"
When that did not work for 10, I tried the below two lines, which still did not let me include 10, but did allow 0.
pattern="[1|2|3|4|5|6|7|8|9|(10)]"
pattern="[1|2|3|4|5|6|7|8|9|(10)]{1}"
This is my first stackoverflow question, so I think I included enough, but I will provide more information if needed.
Use this RegEx: ^([1-9]|10)$
Explanation:
^ Assert position at start of line.
( Capture group open.
[1-9] Match a single character in the range between 1 and 9.
| Equivalent to logical OR.
10 Match the literal characters "10".
) Capture group close.
$ Assert position at the end of the line.
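If you want to sanity-check the pattern outside Angular, here is a quick sketch using Python's re module as a stand-in regex engine (the value list is made up for illustration):

import re

# Note: a character class like [1|2|3|4|5|6|7|8|9|10] matches exactly ONE of
# the listed characters, which is why "10" never matched and "0" (contributed
# by the "10") slipped through. Alternation needs a group, not a class.
pattern = re.compile(r"^([1-9]|10)$")

for value in ["1", "9", "10", "0", "11", "100"]:
    print(value, bool(pattern.match(value)))   # True only for 1 through 10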

What does an Expression-Tree look like (when function calls are involved)?

I've found many places that show expression-trees involving operators (+, -, *, &&, ||, etc.), but I cannot find an example where functions (with zero or more arguments) are involved.
How would following expression be represented using an Expression-Tree?
mid( "This is a string", 1*2, ceil( 4.2 ) ) == "is i"
Thanks a million in advance.
After weeks of researching, I was not able to find the "official" (academic) answer to this question, so I took my own path, and it works smoothly.
I'm offering it here because so far no one gave an answer, just in case it helps someone.
By asking this question, I wanted to know whether I should place the arguments passed to a function as child nodes of the 'function node' or as a property (data) of the 'function node'.
After evaluating the pros and cons of both options, and since nodes in an AST can store as much information as you need (the two children 'left' and 'right' are just the minimum), I decided this was the easiest approach; it is easy to implement and it works perfectly.
This was my choice: place the arguments of the function as data inside the 'function node'.
But if anyone has a better answer, I beg you to share it here.
It might help to think of an expression tree as already being a way of representing functions applied to a set of arguments. For example, a - node has two children, which you can think of as representing the two ordered inputs to the “minus” function.
With that in mind, you can generalize your expression tree by allowing each node to contain an arbitrary function with one child per argument to the function. For example, if you have a function max that returns the maximum of two values, the max node would have two children. If you have a function median that takes three arguments and returns the median, it would have three children.
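To make that concrete, here is a rough Python sketch (the Node class and its field names are my own invention, not any standard AST API) showing the questioner's expression with function arguments stored as child nodes:

class Node:
    def __init__(self, value, *children):
        self.value = value              # operator name, function name, or a literal
        self.children = list(children)  # ordered operands / arguments

# mid("This is a string", 1*2, ceil(4.2)) == "is i"
tree = Node("==",
            Node("mid",
                 Node("This is a string"),
                 Node("*", Node(1), Node(2)),
                 Node("ceil", Node(4.2))),
            Node("is i"))

print(len(tree.children[0].children))   # 3 -- one child per argument of mid()

The == node keeps its usual two children; mid simply gets three children and ceil gets one, so nothing about the tree shape has to change once functions enter the picture.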

using the OR function in Excel

I am just trying to use the OR function in Excel for analyzing some string variables. Instead of using a character count to verify whether someone answered correctly, I would like to use the OR function to accept more than one answer as correct.
For instance, return 1 if cell = "she was always there" or "she was there always".
How would I do this?
=IF(OR(A1="this",A1="that"),1,0)
IF takes three values:
The logical test
The value if true
The value if false
Each value can take other functions as its argument. In this case, we use an OR function as the logical test. OR can take any number of arguments (I'm sure there is a limit but I've never seen it). Each OR argument takes the form of a logical test, and if any of the logical tests are TRUE, then the result of the OR is TRUE.
As a side note, you can nest many functions to create complex tests. But the nesting limit seems to be set at 7. Though there are tricks you can use to get around this, it makes reading the function later very difficult.
If you can live with "TRUE" or "FALSE" returns, then you don't need the IF function. Just:
=OR(A1="she was always there",A1="she was there always")
I found that by Googling "EXCEL OR FUNCTION"
http://office.microsoft.com/en-us/excel-help/or-function-HP010062403.aspx

True until disproven or false until proven?

I've noticed something about my coding that is slightly undefined. Say we have a two-dimensional array, a matrix or a table, and we are looking through it to check whether a property is true for every row or nested dimension.
Say I have a boolean flag that is to be used to check if a property is true or false. My options are to:
1. Initialize it to true and check each cell until proven false. This gives it a wrong name until the code is completely executed.
2. Start on false and check each row until proven true. Only if all rows are true will the data be correct. What is the cleanest way to do this, without a counter?
I've always done 1 without thinking but today it got me wondering. What about 2?
Depends on which one dumps you out of the loop first, IMHO.
For example, with an OR situation, I'd default to false, and as soon as you get a TRUE, return the result, otherwise return the default as the loop falls through.
For an AND situation, I'd do the opposite.
They actually both amount to the same thing and since you say "check if a property is true for every row or nested dimension", I believe the first method is easier to read and perhaps slightly faster.
You shouldn't try to read the value of the flag until the code is completely executed anyway, because the check isn't finished. If you're running asynchronous code, you should guard against accessing the value while it is unstable.
Both methods "give a wrong name" until the code is executed. 1 gives false positives and 2 gives false negatives. I'm not sure what you're trying to avoid by saying this - if you can get the "correct" value before fully running your code, you didn't have run your code in the first place.
How to implement each without a counter (if you don't have a foreach syntax in your language, use the appropriate enumerator->next loop syntax):
1:
bool flag = true;
foreach(item in array)
{
    if(!check(item))
    {
        flag = false;
        break;
    }
}
2:
bool flag = false;
foreach(item in array)
{
    if(!check(item))
    {
        break;
    }
    else if(item.IsLast())
    {
        flag = true;
    }
}
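For what it's worth, in a language with built-in reductions such as Python, both loops collapse into a single expression (check and array below are stand-ins for your own per-row test and data):

def check(item):
    return item % 2 == 0   # stand-in for the per-row property test

array = [2, 4, 6, 8]

# all() short-circuits: it stops at the first failing item, just like
# option 1's early break, and it returns True for an empty sequence.
flag = all(check(item) for item in array)
print(flag)   # True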
Go with the first option. An algorithm always has preconditions, postconditions and invariants. If your invariant is "bool x is true iff all rows from 0 to currentN have a positive property", then everything is fine.
Don't make your algorithm more complex just to make the full program-state valid per row-iteration. Refactor the method, extract it, and make it "atomic" with your language's mechanics (Java: synchronized).
Personally I just throw the whole loop into a somewhat reusable method/function called isPropertyAlwaysTrue(property, array[][]) and have it return false directly if it finds a case where the property is not true.
The inversion of logic, by the way, does not get you out of there any quicker. For instance, if you want the first non-true case, saying areAnyFalse or areAllTrue will have an inverted output, but will have to test the exact same cases.
This is also the case with areAnyTrue and areAllFalse--different words for the exact same algorithm (return as soon as you find a true).
You cannot compare areAnyFalse with areAnyTrue because they are testing for a completely different situation.
Make the property name something like isThisTrue. Then it's answered "yes" or "no" but it's always meaningful.
In Ruby and Scheme you can use a question mark in the name: isThisTrue?. In a lot of other languages, there is a convention of putting "p" for "predicate" on the name -- null-p for a test returning true or false, in LISP.
I agree with Will Hartung.
If you are worried about (1) then just choose a better name for your boolean. IsNotSomething instead of IsSomething.

Has TRUE always had a non-zero value?

I have a co-worker who maintains that TRUE used to be defined as 0 and all other values were FALSE. I could swear that in every language I've worked with, if you could even get a value for a boolean, the value for FALSE was 0. Did TRUE used to be 0? If so, when did we switch?
The 0 / non-0 thing your coworker is confused about probably refers to when people use numeric values as return codes indicating success, not truth (e.g. in bash scripts and some styles of C/C++).
Using 0 = success allows for a much greater precision in specifying causes of failure (e.g. 1 = missing file, 2 = missing limb, and so on).
As a side note: in Ruby, the only false values are nil and false. 0 is true, not because of anything special about numbers, but because it's an object, and every object other than nil and false is truthy.
It might be in reference to a result code of 0, which in most cases, after a process has run, means, "Hey, everything worked fine, no problems here."
I worked at a company with a large amount of old C code. Some of the shared headers defined their own values for TRUE and FALSE, and some did indeed have TRUE as 0 and FALSE as 1. This led to "truth wars":
/* like my constants better */
#undef TRUE
#define TRUE 1
#undef FALSE
#define FALSE 0
If nothing else, bash shells still use 0 for true, and 1 for false.
Several functions in the C standard library return an 'error code' integer as result. Since noErr is defined as 0, a quick check can be 'if it's 0, it's OK'. The same convention carried over to a Unix process' 'result code'; that is, an integer that gives some indication of how a given process finished.
In Unix shell scripting, the result code of the command just executed is available, and is typically used to signify whether the command 'succeeded' or not, with 0 meaning success and anything else a specific non-success condition.
From that, all test-like constructs in shell scripts use 'success' (that is, a result code of 0) to mean TRUE, and anything else to mean FALSE.
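As a quick illustration of that convention (this sketch assumes a Unix-like system and uses Python's subprocess module with the standard true and false utilities):

import subprocess

ok = subprocess.run(["true"]).returncode     # the true utility exits with 0
bad = subprocess.run(["false"]).returncode   # the false utility exits with 1

# Shell-style truth: an exit code of 0 means the command "succeeded".
print(ok == 0)    # True
print(bad == 0)   # False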
On a totally different plane, digital circuits frequently use 'negative logic'. That is, even if 0 volts is called 'binary 0' and some positive value (commonly +5 V or +3.3 V, but nowadays it's not rare to use +1.8 V) is called 'binary 1', some events are 'asserted' by a given pin going to 0. I think there are some noise-resistance advantages, but I'm not sure about the reasons.
Note, however, that there's nothing 'ancient' about this, nor some 'switching time'. Everything I know about it is based on old conventions, but those conventions are totally current and relevant today.
I'm not certain, but I can tell you this: tricks relying on the underlying nature of TRUE and FALSE are prone to error because the definition of these values is left up to the implementer of the language (or, at the very least, the specifier).
System calls in the C standard library typically return -1 on error and 0 on success. Also, the Fortran arithmetic IF statement would (and probably still does) jump to one of three line numbers depending on whether the condition evaluates to less than, equal to, or greater than zero.
e.g.: IF (I-15) 10,20,10
would test for the condition I == 15, jumping to line 20 if true (the expression evaluates to zero) and line 10 otherwise.
Sam is right about the problems of relying on specific knowledge of implementation details.
General rule:
Shells (DOS included) use "0" as "No Error"... not necessarily true.
Programming languages use non-zero to denote true.
That said, if you're in a language which lets you define TRUE or FALSE, define it and always use the constants.
Even today, in some languages (Ruby, Lisp, ...) 0 is true because everything except nil is true. More often 1 is true. That's a common gotcha, so it's sometimes considered good practice not to rely on 0 being false, but to do an explicit test. Java requires you to do this.
Instead of this
int x;
....
x = 0;
if (x) // might be ambiguous
{
}
Make it explicit
if (0 != x)
{
}
I recall doing some VB programming in an Access form where True was -1.
I remember PL/1 had no boolean class. You could create a bit and assign it the result of a boolean expression. Then, to use it, you had to remember that 1 was false and 0 was true.
For the most part, false is defined as 0, and true is non-zero. Some programming languages use 1, some use -1, and some use any non-zero value.
For Unix shells though, they use the opposite convention.
Most commands that run in a Unix shell are actually small programs. They pass back an exit code so that you can determine whether the command was successful (a value of 0), or whether it failed for some reason (1 or more, depending on the type of failure).
This is used in the sh/ksh/bash shell interpreters within the if/while/until commands to check conditions:
if command
then
# successful
fi
If the command is successful (ie, returns a zero exit code), the code within the statement is executed. Usually, the command that is used is the [ command, which is an alias for the test command.
The funny thing is that it depends on the language you are working with. In Lua, 0 is true (only nil and false are false). The same goes for many syscalls in C, where 0 means success.
It's easy to get confused when bash's true/false return statements are the other way around:
$ false; echo $?
1
$ true; echo $?
0
I have heard of and used older compilers where true > 0, and false <= 0.
That's one reason you don't want to use if(pointer) or if(number) to check for zero; they might evaluate to false unexpectedly.
Similarly, I've worked on systems where NULL wasn't zero.
In the C language, before C++, there was no such thing as a boolean. Conditionals were done by testing ints. Zero meant false and any non-zero meant true. So you could write
if (2) {
    alwaysDoThis();
} else {
    neverDothis();
}
Fortunately C++ allowed a dedicated boolean type.
In any language I've ever worked in (going back to BASIC in the late 70s), false has been considered 0 and true has been non-zero.
I can't recall TRUE being 0.
0 is something a C programmer would return to indicate success, though. This can be confused with TRUE.
It's not always 1 either. It can be -1 or just non-zero.
For languages without a built in boolean type, the only convention that I have seen is to define TRUE as 1 and FALSE as 0. For example, in C, the if statement will execute the if clause if the conditional expression evaluates to anything other than 0.
I even once saw a coding guidelines document which specifically said not to redefine TRUE and FALSE. :)
If you are using a language that has a built in boolean, like C++, then keywords true and false are part of the language, and you should not rely on how they are actually implemented.
In languages like C there was no boolean type, so you had to define your own. Could they have been working with a non-standard BOOL override?
DOS and exit codes from applications generally use 0 to mean success and non-zero to mean failure of some type!
DOS error codes are 0-255, and when tested using the 'errorlevel' syntax the test means "at or above the specified value", so the following matches 2 and above to the first goto, 1 to the second, and 0 (success) to the final one!
IF errorlevel 2 goto CRS
IF errorlevel 1 goto DLR
IF errorlevel 0 goto STR
The SQL Server Database Engine optimizes storage of bit columns. If there are 8 or less bit columns in a table, the columns are stored as 1 byte. If there are from 9 up to 16 bit columns, the columns are stored as 2 bytes, and so on.
The string values TRUE and FALSE can be converted to bit values: TRUE is converted to 1 and FALSE is converted to 0.
Converting to bit promotes any nonzero value to 1.
Every language can end up treating 0 as true or as false, so stop relying on the number and use the word true (or t and f if you want 1-byte storage).