Could you please explain De Morgan's rules as simply as possible (e.g. to someone with only a secondary-school mathematics background)?
Overview of Boolean algebra
We have two values: T and F.
We can combine these values in three ways: NOT, AND, and OR.
NOT
NOT is the simplest:
NOT T = F
NOT F = T
We can write this as a truth table:
when given.. | results in...
============================
T | F
F | T
For conciseness:
x | NOT x
=========
T | F
F | T
Think of NOT as the complement, that is, it turns one value into the other.
AND
AND works on two values:
x y | x AND y
=============
T T | T
T F | F
F T | F
F F | F
AND is T only when both its arguments (the values of x and y in the truth table) are T — and F otherwise.
OR
OR is T when at least one of its arguments is T:
x y | x OR y
=============
T T | T
T F | T
F T | T
F F | F
Combining
We can define more complex combinations. For example, we can write a truth table for x AND (y OR z), and the first row is below.
x y z | x AND (y OR z)
======================
T T T | ?
Once we know how to evaluate x AND (y OR z), we can fill in the rest of the table.
To evaluate the combination, evaluate the pieces and work up from there. The parentheses show which parts to evaluate first. What you know from ordinary arithmetic will help you work it out. Say you have 10 - (3 + 5). First you evaluate the part in parentheses to get
10 - 8
and evaluate that as usual to get the answer, 2.
Evaluating these expressions works similarly. We know how OR works from above, so we can expand our table a little:
x y z | y OR z | x AND (y OR z)
===============================
T T T | T | ?
Now it's almost like we're back to the x AND y table. We simply substitute the value of y OR z and evaluate. In the first row, we have
T AND (T OR T)
which is the same as
T AND T
which is simply T.
We repeat the same process for all 8 possible values of x, y, and z (2 possible values of x times 2 possible values of y times 2 possible values of z) to get
x y z | y OR z | x AND (y OR z)
===============================
T T T | T | T
T T F | T | T
T F T | T | T
T F F | F | F
F T T | T | F
F T F | T | F
F F T | T | F
F F F | F | F
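This kind of table can also be generated mechanically. As an illustration (using Python's True/False and and/or as stand-ins for our T/F, AND, and OR), here is a short sketch that reproduces the table above:

```python
from itertools import product

# Each of x, y, z ranges over the two truth values, giving 2 * 2 * 2 = 8 rows.
rows = []
for x, y, z in product([True, False], repeat=3):
    rows.append((x, y, z, x and (y or z)))

for x, y, z, result in rows:
    print(x, y, z, '->', result)
```

The first three rows come out True and the remaining five False, exactly as in the table.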
Some expressions may be more complex than they need to be. For example,
x | NOT (NOT x)
===============
T | T
F | F
In other words, NOT (NOT x) is equivalent to just x.
DeMorgan's rules
DeMorgan's rules are handy tricks that let us convert between equivalent expressions that fit certain patterns:
NOT (x AND y) = (NOT x) OR (NOT y)
NOT (x OR y) = (NOT x) AND (NOT y)
(You might think of this as how NOT distributes through simple AND and OR expressions.)
Your common sense probably already understands these rules! For example, think of the bit of folk wisdom that "you can't be in two places at once." We could make it fit the first part of the first rule:
NOT (here AND there)
Applying the rule, that's another way of saying "you're not here or you're not there."
Exercise: How might you express the second rule in plain English?
For the first rule, let's look at the truth table for the expression on the left side of the equals sign.
x y | x AND y | NOT (x AND y)
=============================
T T | T | F
T F | F | T
F T | F | T
F F | F | T
Now the righthand side:
x y | NOT x | NOT y | (NOT x) OR (NOT y)
========================================
T T | F | F | F
T F | F | T | T
F T | T | F | T
F F | T | T | T
The final values are the same in both tables. This proves that the expressions are equivalent.
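If you have a computer handy, you can also check every row mechanically. Here is a minimal Python sketch, using Python's True/False and not/and/or as stand-ins for the boolean values and operators:

```python
from itertools import product

rules_hold = True
for x, y in product([True, False], repeat=2):
    # First rule: NOT (x AND y) = (NOT x) OR (NOT y)
    rules_hold &= (not (x and y)) == ((not x) or (not y))
    # Second rule: NOT (x OR y) = (NOT x) AND (NOT y)
    rules_hold &= (not (x or y)) == ((not x) and (not y))
    # Bonus, from earlier: double negation, NOT (NOT x) = x
    rules_hold &= (not (not x)) == x

print(rules_hold)
```

Since both rules involve only two variables, checking all four rows is a complete proof.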
Exercise: Prove that the expressions NOT (x OR y) and (NOT x) AND (NOT y) are equivalent.
Looking over some of the answers, I think I can explain it better by using conditions that are actually related to each other.
DeMorgan's Law refers to the fact that there are two equivalent ways to write the opposite of any combination of two conditions - specifically, of the AND combination (both conditions must be true) and of the OR combination (either one can be true). Examples:
Part 1 of DeMorgan's Law
Statement: Alice has a sibling.
Conditions: Alice has a brother OR Alice has a sister.
Opposite: Alice is an only child (does NOT have a sibling).
Conditions: Alice does NOT have a brother, AND she does NOT have a sister.
In other words: NOT [A OR B] = [NOT A] AND [NOT B]
Part 2 of DeMorgan's Law
Statement: Bob is a car driver.
Conditions: Bob has a car AND Bob has a license.
Opposite: Bob is NOT a car driver.
Conditions: Bob does NOT have a car, OR Bob does NOT have a license.
In other words: NOT [A AND B] = [NOT A] OR [NOT B].
I think this would be a little less confusing to a 12-year-old. It's certainly less confusing than all this nonsense about truth tables (even I'm getting confused looking at all of those).
It is just a way to restate truth statements, which can provide simpler ways of writing conditionals to do the same thing.
In plain English:
When something is not (this or that), it is also not this and not that.
When something is not (this and that), it is either not this or not that.
Note: Given the imprecision of the English word 'or', I am using it to mean a non-exclusive or in the preceding example.
For example, the following pairs of pseudo-code conditions are equivalent:
If NOT (A OR B) ...
If (NOT A) AND (NOT B) ...
If NOT (A AND B) ...
If (NOT A) OR (NOT B) ...
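The same equivalence can be demonstrated in a real language. Here is a minimal Python sketch; branch_one and branch_two are hypothetical names for the two ways of writing the first pair of conditions:

```python
def branch_one(a, b):
    # if NOT (a OR b) ...
    return "taken" if not (a or b) else "skipped"

def branch_two(a, b):
    # if (NOT a) AND (NOT b) ...
    return "taken" if (not a) and (not b) else "skipped"

# De Morgan's rule says the two conditionals agree on every input.
for a in (True, False):
    for b in (True, False):
        assert branch_one(a, b) == branch_two(a, b)
```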
If you're a police officer looking for underage drinkers, you can do one of the following, and De Morgan's law says they amount to the same thing:
FORMULATION 1 (A AND B)
If they're under the age
limit AND drinking an alcoholic
beverage, arrest them.
FORMULATION 2 (NOT(NOT A OR NOT B))
If they're over
the age limit OR drinking a
non-alcoholic beverage, let them go.
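We can check mechanically that the two formulations always agree. In this Python sketch, under_age and alcoholic are hypothetical boolean flags standing in for conditions A and B:

```python
decisions = []
for under_age in (True, False):
    for alcoholic in (True, False):
        # FORMULATION 1: arrest if (A AND B)
        arrest_1 = under_age and alcoholic
        # FORMULATION 2: let them go if (NOT A) OR (NOT B),
        # i.e. arrest if NOT ((NOT A) OR (NOT B))
        arrest_2 = not ((not under_age) or (not alcoholic))
        assert arrest_1 == arrest_2
        decisions.append(arrest_1)
```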
This, by the way, isn't my example. As far as I know, it was part of a scientific experiment where the same rule was expressed in different ways to find out how much of a difference it made in peoples' ability to understand them.
"He doesn't have either a car or a bus." means the same thing as "He doesn't have a car, and he doesn't have a bus."
"He doesn't have a car and a bus." means the same thing as "He either doesn't have a car, or doesn't have a bus, I'm not sure which, maybe he has neither."
Of course, in plain English, "He doesn't have a car and a bus." has a strong implication that he has at least one of those two things. But, strictly speaking, from a logic standpoint the statement is also true if he doesn't have either of them.
Formally:
not (car or bus) = (not car) and (not bus)
not (car and bus) = (not car) or (not bus)
In English, 'or' tends to mean a choice: that you don't have both things. In logic, 'or' always means the same as 'and/or' in English.
Here's a truth table that shows how this works:
First case: not (car or bus) = (not car) and (not bus)
c | b || c or b | not (c or b) || (not c) | (not b) | (not c) and (not b)
---+---++--------+--------------++---------+---------+--------------------
T | T || T | F || F | F | F
---+---++--------+--------------++---------+---------+--------------------
T | F || T | F || F | T | F
---+---++--------+--------------++---------+---------+--------------------
F | T || T | F || T | F | F
---+---++--------+--------------++---------+---------+--------------------
F | F || F | T || T | T | T
---+---++--------+--------------++---------+---------+--------------------
Second case: not (car and bus) = (not car) or (not bus)
c | b || c and b | not (c and b) || (not c) | (not b) | (not c) or (not b)
---+---++---------+---------------++---------+---------+--------------------
T | T || T | F || F | F | F
---+---++---------+---------------++---------+---------+--------------------
T | F || F | T || F | T | T
---+---++---------+---------------++---------+---------+--------------------
F | T || F | T || T | F | T
---+---++---------+---------------++---------+---------+--------------------
F | F || F | T || T | T | T
---+---++---------+---------------++---------+---------+--------------------
Draw a simple Venn diagram, two intersecting circles. Put A in the left and B in the right. Now (A and B) is obviously the intersecting bit. So NOT(A and B) is everything that's not in the intersecting bit, the rest of both circles. Colour that in.
Draw another two circles like before, A and B, intersecting. Now NOT(A) is everything that's in the right circle (B) but not in the intersection, because the intersection is A as well as B. Colour this in. Similarly, NOT(B) is everything in the left circle but not in the intersection. Colour this in.
The two drawings look the same. You've proved that NOT(A and B) = NOT(A) or NOT(B). T'other case is left as an exercise for the student.
DeMorgan's Law allows you to state a string of logical operations in different ways. It applies to logic and set theory, where in set theory you use complement for not, intersection for and, and union for or.
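As a quick illustration of the set-theory form (complement for not, intersection for and, union for or), here is a Python sketch with small, arbitrarily chosen sets inside a universe U:

```python
U = set(range(10))   # the universe of discourse
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

# Complement within U plays the role of NOT.
assert U - (A & B) == (U - A) | (U - B)   # NOT (A AND B) = (NOT A) OR (NOT B)
assert U - (A | B) == (U - A) & (U - B)   # NOT (A OR B) = (NOT A) AND (NOT B)
```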
DeMorgan's Law allows you to simplify a logical expression, performing an operation that is rather similar to the distributive property of multiplication.
So, if you have the following in a C-like language
if (!(x || y || z)) { /* do something */ }
It is logically equivalent to:
if (!x && !y && !z)
It also works like so:
if (!(x && !y && z))
turns into
if (!x || y || !z)
And you can, of course, go in reverse.
The equivalence of these statements is easy to see using something called a truth table. In a truth table, you simply lay out your variables (x, y, z) and list all the combinations of inputs for these variables. You then have columns for each predicate, or logical expression, and you determine for the given inputs, the value of the expression. Any university curriculum for computer science, computer engineering, or electrical engineering will likely drive you bonkers with the number and size of truth tables you must construct.
So why learn them? I think the biggest reason in computing is that it can improve the readability of larger logical expressions. Some people don't like using logical not (!) in front of expressions, as they think it can confuse someone who misses it. Applying DeMorgan's Law at the gate level of chips is also useful: certain gate types are faster or cheaper, or you may already be using a whole integrated circuit for them, so you can reduce the number of chip packages required for the outcome.
Not sure why I've retained this all these years, but it has proven useful on a number of occasions. Thanks to Mr Bailey, my grade 10 math teacher. He called it deMorgan's Theorem.
!(A || B) <==> (!A && !B)
!(A && B) <==> (!A || !B)
When you move the negation in or out of the brackets, the logical operator (AND, OR) changes.
Related
I'm working with the CUDD C++ interface (https://github.com/ivmai/cudd), but there is almost no information about this library. I would like to know how to remove one variable according to its value.
For example, I now have the following table stored in a BDD:
|-----|-----|-----|
| x1 | x2 | y |
|-----|-----|-----|
| 0 | 0 | 1 |
|-----|-----|-----|
| 0 | 1 | 1 |
|-----|-----|-----|
| 1 | 0 | 1 |
|-----|-----|-----|
| 1 | 1 | 0 |
|-----|-----|-----|
And I want to split the previous table into two separate BDDs according to the value of x2 and remove that node afterwards:
If x2 = 0:
|-----|-----|
| x1 | y |
|-----|-----|
| 0 | 1 |
|-----|-----|
| 1 | 1 |
|-----|-----|
If x2 = 1:
|-----|-----|
| x1 | y |
|-----|-----|
| 0 | 1 |
|-----|-----|
| 1 | 0 |
|-----|-----|
Is it possible?
The reason that there is almost no documentation on the C++ interface of the CUDD library is that it is just a wrapper for the C functions, for which there is plenty of documentation.
The C++ wrapper is mainly useful for getting rid of all the Cudd_Ref(...) and Cudd_RecursiveDeref(...) calls that code using the C interface would need to do. Note that you can use the C interface from C++ code as well, if you want.
To do what you want to do, you have to combine the Boolean operators offered by CUDD in a way that you obtain a new Boolean function with the desired properties.
The first step is to restrict s to the x=0 and x=1 cases:
BDD s0 = s & !x;
BDD s1 = s & x;
As you noticed, the new BDDs are not (yet) oblivious to the value of the x variable. You want them to be "don't care" with respect to the value of x. Since you already know that x is restricted to one particular value in s0 and s1, you can use the existential abstraction operator:
s0 = s0.ExistAbstract(x);
s1 = s1.ExistAbstract(x);
Note that x is used here as a so-called cube (see below).
These are now the BDDs that you want.
Cube explanation: If you abstract from multiple variables at the same time, you should first compute such a cube from all the variables to be abstracted from. A cube is mainly used for representing a set of variables. From mathematical logic, it is known that if you existentially or universally abstract away multiple variables, the order of abstracting them away does not matter. Since the recursive BDD operations in CUDD are implemented over pairs (or triples) of BDDs whenever possible, CUDD internally represents a set of variables as a cube as well, so that an existential abstraction operation can work on (1) the BDD on which existential abstraction is to be performed, and (2) the BDD representing the set of variables to be abstracted from. The internal representation of a cube as a BDD should not be of relevance to a developer just using CUDD (rather than extending CUDD), except that a BDD representing a single variable can also be used as a cube.
The following is an approach using the Cython bindings to CUDD from the Python package dd; it substitutes constant values for the variable x2.
import dd.cudd as _bdd
bdd = _bdd.BDD()
bdd.declare('x1', 'x2')
# negated conjunction of the variables x1 and x2
u = bdd.add_expr(r'~ (x1 /\ x2)')
let = dict(x2=False)
v = bdd.let(let, u)
assert v == bdd.true, v
let = dict(x2=True)
w = bdd.let(let, u)
w_ = bdd.add_expr('~ x1')
assert w == w_, (w, w_)
The same code runs in pure Python by changing the import statement to import dd.autoref as _bdd. The pure Python version of dd can be installed with pip install dd. Installation of dd with the module dd.cudd is described in the dd documentation.
Suppose I have a database that contains two different types of information about certain unique objects, say their 'State' and 'Condition' to give a name to their classifiers. The State can take the values A, B, C or D, the condition the values X or Y. Depending on where I am sourcing data from, sometimes this database lacks entries for a particular pair.
From this data, I'd like to make a crosstab query that shows the count of data for each State and Condition combination, but that still yields a row even when every count in that row is 0. For example, I'd like the following table:
Unit | State | Condition
1 | A | X
2 | B | Y
3 | C | X
4 | B | Y
5 | B | X
6 | B | Y
7 | C | X
To produce the following crosstab:
Count | X | Y
A | 1 | 0
B | 1 | 3
C | 2 | 0
D | 0 | 0
Any help that would leave blanks instead of zeroes is fit for purpose as well, these are being pasted into a template Excel document that requires each crosstab to have an exact dimension.
What I've Tried:
The standard crosstab SQL
TRANSFORM Count(Unit)
SELECT Condition
FROM Sheet
GROUP BY Condition
PIVOT State;
obviously doesn't work as it doesn't raise the possibility of a D occurring. PIVOTing by a nested IIf that explicitly names D as a possible value does nothing either, nor does combining it with an Nz() around the TRANSFORM clause variable.
TRANSFORM Count(sheet.unit) AS CountOfunit
SELECT AllStates.state
FROM AllStates LEFT JOIN sheet ON AllStates.state = sheet.state
GROUP BY AllStates.state
PIVOT sheet.condition;
This uses a table "AllStates" that has a row for each state you want to force into the result. It will produce an extra column for entries that are neither Condition X nor Condition Y - that's where the forced entry for state D ends up, even though the count is 0.
If you have a relatively small number of conditions, you can use this instead:
SELECT AllStates.state, Sum(IIf([condition]="X",1,0)) AS X, Sum(IIf([condition]="Y",1,0)) AS Y
FROM AllStates LEFT JOIN sheet ON AllStates.state = sheet.state
GROUP BY AllStates.state;
Unlike a crosstab, though, this won't automatically add new columns when new condition codes are added to the data. It can also be cumbersome if you have many condition codes.
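To see the idea behind the conditional-aggregation query outside of SQL, here is a Python sketch that mimics it on the sample data; the rows and all_states names are just illustrative:

```python
rows = [  # (unit, state, condition), copied from the example table
    (1, "A", "X"), (2, "B", "Y"), (3, "C", "X"), (4, "B", "Y"),
    (5, "B", "X"), (6, "B", "Y"), (7, "C", "X"),
]
all_states = ["A", "B", "C", "D"]   # forced row set, like the AllStates table

# Count matching rows per (state, condition); states with no data keep 0s,
# which is exactly what the LEFT JOIN against AllStates achieves in SQL.
crosstab = {
    s: {c: sum(1 for _, st, cond in rows if st == s and cond == c)
        for c in ("X", "Y")}
    for s in all_states
}
```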
Good morning everyone,
Here's what I'm working on today, and the issue I'm running in to:
--A
data Row = A | B | C | D | E | F | G | H | I | J deriving (Enum, Ord, Show, Bounded, Eq, Read)
data Column = One | Two | Three | Four | Five | Six | Seven | Eight | Nine | Ten deriving (Enum, Ord, Show, Bounded, Eq, Read)
--B
data Address = Address Row Column deriving (Show, Read, Eq)
Then a few lines later I get to the problem child:
toAddress r c = Address(toEnum r, toEnum c)
I need to feed Address a Row and Column, but I need to turn r and c into Row and Column (not Ints)
Obviously toAddress is not structured correctly to carry out this task. The requirement is as follows:
Write a function toAddress that takes in a row and column, each in [0-9]. Construct an Address and return it. Use toEnum to index into your Row and Column enum lists.
Does anyone have any suggestions on how to accomplish what I'm going for here?
Thank you!
You got the syntax wrong.
In Haskell, an application of a function f :: A -> B -> C looks like f a b, not f(a,b). f(a,b) is still valid syntax, but it does not do what you want: it passes only one argument to the function (namely the tuple consisting of a and b).
So the correct implementation of toAddress looks like this:
toAddress r c = Address (toEnum r) (toEnum c)
I need to detect a commutative pattern in one of my functions. I thought that writing the following will do the work:
let my_fun a b = match a,b with
(*...*)
| a,b
| b,a when is_valid b -> process b (***)
(*...*)
This doesn't work, and OCaml complains with a "this sub-pattern is unused" warning for the line marked with (***).
1) Can someone explain what this warning is trying to say and why this doesn't work?
2) How can I write this elegantly, without using if then else, given that I want to know which argument satisfies is_valid?
3) Is it possible to get the intended functionality using only pattern matching, without repeating when is_valid b -> process b as happens below?
let my_fun a b = match a,b with
(*...*)
| a,b when is_valid b -> process b
| b,a when is_valid b -> process b
(*...*)
Edit:
In my concrete example a and b are pairs. The function is a bit more complicated but the following will illustrate the case:
let f a b = match a,b with
| (a1,a2),(b1,b2)
| (b1,b2),(a1,a2) when b1 = b2 -> a1 + a2
Calling f (1,1) (1,2) will yield a pattern-match failure. I now understand why (thanks to the answers below), and I understand how I can make it work if I have different constructors for each element (as in Ashish Agarwal's answer). Can you suggest a way to make it work in my case?
The matching works by first matching the pattern and, if that succeeds, then evaluating the guard condition in the environment produced by that pattern match. Since a,b will always bind, this is the only case used, and the compiler correctly reports that b,a is never used. You'll have to repeat that line,
let my_fun a b = match a,b with
| a,b when is_valid b -> process b
| b,a when is_valid b -> process b
Your method could work if you matched not against bare variables but against some variant, for example,
let my_fun a b = match a,b with
| a, `Int b
| `Int b, a when is_valid b -> process b
Edit: Think of the multiple patterns using one guard as a subexpression,
let my_fun a b = match a,b with
| ((a,b) | (b,a)) when is_valid b -> process b
You'll see this exemplified in the definition for patterns. It's really one pattern, composed of patterns, being matched.
For your first question, the thing to realize is you only have one pattern ((a,b) | (b,a)), which happens to be an "or" pattern. Matching proceeds from left to right in an "or" pattern. Since (a,b) will match anything, the second part will never be used.
For your second question, I don't see the problem, but it depends on the types of a and b. Here's an example:
type t = A of int | B of float
let my_fun a b = match a,b with
| A a, B b
| B b, A a when b > 0. -> (float_of_int a) +. b
| … -> (* other cases *)
It would also work for simpler types:
let my_fun a b = match a,b with
| 1,b
| b,1 when b > 0 -> b + 1
| … -> (* other cases *)
If you still can't get this to work in your case, let us know the types of a and b you are working with.
I am trying to verify something for myself about operator and function precedence in Haskell. For instance, the following code
list = map foo $ xs
can be rewritten as
list = (map foo) $ (xs)
and will eventually be
list = map foo xs
My question used to be, why the first formulation would not be rewritten as
list = (map foo $) xs
since function precedence is always higher than operator precedence, but I think I have found the answer: operators are simply not allowed to be arguments of functions (except, of course, if you surround them with parentheses). Is this right? If so, I find it odd that there is no mention of this mechanic/rule in RWH or Learn You a Haskell, or any of the other places I have searched. So if you know a place where the rule is stated, please link to it.
-- edit: Thank you for your quick answers. I think my confusion came from thinking that an operator literal would somehow evaluate to something, that could get consumed by a function as an argument. It helped me to remember, that an infix operator can be mechanically translated to a prefix functions. Doing this to the first formulation yields
($) (map foo) (xs)
where there is no doubt that ($) is the consuming function, and since the two formulations are equivalent, then the $ literal in the first formulation cannot be consumed by map.
Firstly, application (whitespace) is the highest precedence "operator".
Secondly, in Haskell, there's really no distinction between operators and functions, other than that operators are infix by default, while functions aren't. You can convert functions to infix with backticks
2 `f` x
and convert operators to prefix with parens:
(+) 2 3
So, your question is a bit confused.
Now, specific functions and operators will have declared precedence, which you can find in GHCi with ":info":
Prelude> :info ($)
($) :: (a -> b) -> a -> b -- Defined in GHC.Base
infixr 0 $
Prelude> :info (+)
class (Eq a, Show a) => Num a where
(+) :: a -> a -> a
infixl 6 +
Showing both precedence and associativity.
You are correct. This rule is part of the Haskell syntax defined by the Haskell Report. In particular note in Section 3, Expressions, that the argument to function application (an fexp) must be an aexp. An aexp allows operators as part of sections, and also within a parenthesized expression, but not bare operators.
In map foo $ xs, the Haskell syntax means that this is parsed as two expressions which are applied to the binary operator $. As sepp2k notes, the syntax (map foo $) is a left section and has a different meaning.
I have to confess I've never thought much about this and actually had to look it up in the Report to see why operators have the behavior they do.
In addition to the information provided by other answers already, note that different operators can have different precedences over other operators, as well as being left-/right- or non-associative.
You can find these properties for the Prelude operators in the Haskell 98 Report fixity section.
+--------+----------------------+-----------------------+-------------------+
| Prec- | Left associative | Non-associative | Right associative |
| edence | operators | operators | operators |
+--------+----------------------+-----------------------+-------------------+
| 9 | !! | | . |
| 8 | | | ^, ^^, ** |
| 7 | *, /, `div`, | | |
| | `mod`, `rem`, `quot` | | |
| 6 | +, - | | |
| 5 | | | :, ++ |
| 4 | | ==, /=, <, <=, >, >=, | |
| | | `elem`, `notElem` | |
| 3 | | | && |
| 2 | | | || |
| 1 | >>, >>= | | |
| 0 | | | $, $!, `seq` |
+--------+----------------------+-----------------------+-------------------+
Any operator lacking a fixity declaration is assumed to be left associative with precedence 9.
Remember, function application has highest precedence (think of precedence 10 compared to the other precedences in the table) [1].
The difference is that infix operators get placed between their arguments, so this
list = map foo $ xs
can be rewritten in prefix form as
list = ($) (map foo) xs
which, by the definition of the $ operator, is simply
list = (map foo) xs
Operators can be passed as function arguments if you surround them with parentheses (i.e. map foo ($) xs, which would indeed be parsed as (map foo ($)) xs). However, if you do not surround them with parentheses, you are correct that they cannot be passed as arguments (or assigned to variables).
Also note that the syntax (someValue $) (where $ could be any operator) actually means something different: it is equivalent to \x -> someValue $ x, i.e. it partially applies the operator to its left operand (which in the case of $ is a no-op, of course). Likewise, ($ x) partially applies the operator to its right operand. So map ($ x) [f, g, h] would evaluate to [f x, g x, h x].