CUDD: Manipulation of BDDs

I'm working with the CUDD C++ interface (https://github.com/ivmai/cudd), but there is almost no information about this library. I would like to know how to remove one variable according to its value.
For example, I currently have the following table stored in a BDD:
+----+----+---+
| x1 | x2 | y |
+----+----+---+
| 0  | 0  | 1 |
| 0  | 1  | 1 |
| 1  | 0  | 1 |
| 1  | 1  | 0 |
+----+----+---+
And I want to split the previous table into two separate BDDs according to the value of x2, removing that node afterwards:
If x2 = 0:
+----+---+
| x1 | y |
+----+---+
| 0  | 1 |
| 1  | 1 |
+----+---+
If x2 = 1:
+----+---+
| x1 | y |
+----+---+
| 0  | 1 |
| 1  | 0 |
+----+---+
Is it possible?

The reason that there is almost no documentation on the C++ interface of the CUDD library is that it is just a wrapper for the C functions, for which there is plenty of documentation.
The C++ wrapper is mainly useful for getting rid of all the Cudd_Ref(...) and Cudd_RecursiveDeref(...) calls that code using the C interface would need to do. Note that you can use the C interface from C++ code as well, if you want.
To do what you want to do, you have to combine the Boolean operators offered by CUDD so that you obtain a new Boolean function with the desired properties.
The first step is to restrict s (the BDD for your function) to the x=0 and x=1 cases:
BDD s0 = s & !x;
BDD s1 = s & x;
As you noticed, the new BDDs are not (yet) oblivious to the value of the x variable. You want them to be "don't care" w.r.t. the value of x. Since you already know that x is restricted to one particular value in s0 and s1, you can use the existential abstraction operator:
s0 = s0.ExistAbstract(x);
s1 = s1.ExistAbstract(x);
Note that x is used here as a so-called cube (see below).
These are now the BDDs that you want.
Cube explanation: If you abstract from multiple variables at the same time, you should first compute such a cube from all the variables to be abstracted from. A cube is mainly used for representing a set of variables. From mathematical logic, it is known that if you existentially or universally abstract away multiple variables, the order in which these variables are abstracted away does not matter. Since the recursive BDD operations in CUDD are implemented over pairs (or triples) of BDDs whenever possible, CUDD internally represents a set of variables as a cube as well, so that an existential abstraction operation can just work on (1) the BDD for which existential abstraction is to be performed, and (2) the BDD representing the set of variables to be abstracted from. The internal representation of a cube as a BDD should not be of relevance to a developer just using CUDD (rather than extending CUDD), except that a BDD representing a single variable can also be used as a cube.
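Putting the pieces together, here is a minimal sketch using the C++ wrapper (cuddObj.hh). It is only an illustration: the variable names mirror the question's table, and s = NOT(x1 AND x2) reproduces it.
#include "cuddObj.hh"

int main() {
    Cudd mgr;
    BDD x1 = mgr.bddVar();                 // first variable
    BDD x2 = mgr.bddVar();                 // second variable
    BDD s  = !(x1 & x2);                   // the question's table: y = NOT(x1 AND x2)
    BDD s0 = (s & !x2).ExistAbstract(x2);  // restrict to x2 = 0, then abstract x2 away
    BDD s1 = (s &  x2).ExistAbstract(x2);  // restrict to x2 = 1, then abstract x2 away
    // s0 is now the constant-true function, s1 equals !x1;
    // to abstract several variables at once, pass a cube, e.g. x2 & x3
    return 0;
}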

An approach using the Cython bindings to CUDD from the Python package dd is the following, which substitutes constant values for the variable x2.
import dd.cudd as _bdd
bdd = _bdd.BDD()
bdd.declare('x1', 'x2')
# negated conjunction of the variables x1 and x2
u = bdd.add_expr(r'~ (x1 /\ x2)')
let = dict(x2=False)
v = bdd.let(let, u)
assert v == bdd.true, v
let = dict(x2=True)
w = bdd.let(let, u)
w_ = bdd.add_expr('~ x1')
assert w == w_, (w, w_)
The same code runs in pure Python by changing the import statement to import dd.autoref as _bdd. The pure Python version of dd can be installed with pip install dd. Installation of dd with the module dd.cudd is described in the dd documentation.

What's the best way to retrieve array data from MySQL

I'm storing an object / data structure like this inside a MySQL (actually a MariaDB) database:
{
  idx: 7,
  a: "content A",
  b: "content B",
  c: ["entry c1", "entry c2", "entry c3"]
}
And to store it I'm using 2 tables, very similar to the method described in this answer: https://stackoverflow.com/a/17371729/3958875
i.e.
Table 1:
+-----+---+---+
| idx | a | b |
+-----+---+---+
Table 2:
+------------+-------+
| owning_obj | entry |
+------------+-------+
And then made a view that joins them together, so I get this:
+-----+------------+------------+-----------+
| idx | a | b | c |
+-----+------------+------------+-----------+
| 7 | content A1 | content B1 | entry c11 |
| 7 | content A1 | content B1 | entry c21 |
| 7 | content A1 | content B1 | entry c31 |
| 8 | content A2 | content B2 | entry c12 |
| 8 | content A2 | content B2 | entry c22 |
| 8 | content A2 | content B2 | entry c32 |
+-----+------------+------------+-----------+
My question is: what is the best way to get this back into my object form? (E.g. I want an array of objects of the type specified above for all entries with idx between 5 and 20.)
There are 2 ways I can think of, but both seem rather inefficient.
Firstly, we can just send the whole table back to the server and have it build a hashmap keyed on the primary key (or some other unique index), collecting up the different c values and rebuilding the objects that way. But that means sending a lot of duplicate data, plus extra memory and processing time to rebuild the objects on the server. This method also won't scale pleasantly if we have multiple arrays, or arrays within arrays.
The second method would be to do multiple queries: filter Table 1 to get back the list of idx values you want, and then for each idx send a query for Table 2 where owning_obj = current idx. This would mean sending a whole lot more queries.
Neither of these options seems very good, so I'm wondering if there is a better way. Currently I'm thinking it could be something that utilizes JSON_OBJECT(), but I'm not sure how.
This seems like a common situation, but I can't seem to find the exact wording to search for to get the answer.
PS: The server interfacing with MySQL/MariaDB is written in Rust, though I don't think that's relevant to this question.
You can use GROUP_CONCAT to combine all the c values into a comma-separated string.
SELECT t1.idx, t1.a, t1.b, GROUP_CONCAT(entry) AS c
FROM table1 AS t1
LEFT JOIN table2 AS t2 ON t1.idx = t2.owning_obj
GROUP BY t1.idx
Then explode the string in PHP:
$result_array = [];
while ($row = $result->fetch_assoc()) {
    $row['c'] = explode(',', $row['c']);
    $result_array[] = $row;
}
However, if the entries can be long, make sure you increase group_concat_max_len.
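For example, the limit (1024 bytes by default) can be raised for the current session before running the query:
SET SESSION group_concat_max_len = 1000000;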
If you're using MySQL 8.0 you can also use JSON_ARRAYAGG(). This will create a JSON array of the entry values, which you can convert to a PHP array using json_decode(). This is a little safer, since GROUP_CONCAT() will mess up if any of the values contains a comma. You can change the separator, but then you need a separator that will never appear in any value. Unfortunately, JSON_ARRAYAGG() isn't available in MariaDB.
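A sketch of that variant, reusing the question's table and column names:
SELECT t1.idx, t1.a, t1.b, JSON_ARRAYAGG(t2.entry) AS c
FROM table1 AS t1
LEFT JOIN table2 AS t2 ON t1.idx = t2.owning_obj
GROUP BY t1.idx
Then, in PHP, $row['c'] = json_decode($row['c']); turns each JSON array back into a PHP array, with no separator worries.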

Unique combinations of variables in Stata

I need assistance with writing Stata code that can get me unique combinations of variables. I have 7 variables and I need to run code that gives me the unique combinations of all of these variables. Every row will be a unique combination of all 7 variables.
An example:
V1: A, B, C
V2: 1, 2, 3
All combinations: A1, A2, A3, B1, B2, B3, C1, C2, C3
Unique combinations of the two variables: 9 in total.
I have 15,000 observations. I have code in R, but R fails to produce output on large data (memory error). I want to do this in Stata.
It is not especially clear what you want created or done. There is no code here, not even R code showing how what you want is done in R. There is no reproducible example.
You might want to check out egen, group(). (A previous answer to this effect from @Dimitriy V. Masterov, an experienced user of Stata, was twice incorrectly deleted as spam, presumably by people not knowing Stata.)
Alternatively, try installing groups from SSC.
UPDATE: The answer sounds more like fillin. For "unique" read "distinct".
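A sketch of those two suggestions (the variable names v1-v7 are placeholders for your actual variables):
* tag each distinct combination that occurs in the data
egen combo = group(v1 v2 v3 v4 v5 v6 v7), label
* or rectangularize the data, creating every possible combination
fillin v1 v2 v3 v4 v5 v6 v7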
Bit of a late response, but I just stumbled across this today. If I understand the question, something like this should do the trick, although I'm not sure it's easily applied to more complex data, or that this is even the best way...
* Create Sample Data
clear
set obs 3
gen str var1 = "a" in 1
replace var1="b" in 2
replace var1="c" in 3
gen var2= _n
* Find number of Unique Groupings to set obs
by var1 var2, sort: gen groups=_n==1
keep if groups==1
drop groups
di _N^2
set obs 9
* Create New Variable
forvalues i = 4(3)9 {
    forvalues j = 5(3)9 {
        forvalues k = 6(3)9 {
            replace var1="a" if _n==`i'
            replace var1="b" if _n==`j'
            replace var1="c" if _n==`k'
        }
    }
}
sort var1
egen i=seq(), f(1) t(3)
tostring i, replace
gen NewVar=var1+i
list NewVar
+--------+
| NewVar |
|--------|
1. | a1 |
2. | a2 |
3. | a3 |
4. | b1 |
5. | b2 |
|--------|
6. | b3 |
7. | c1 |
8. | c2 |
9. | c3 |
+--------+
Unfortunately, as far as I know, there is no easy way to do this; it will require a fair amount of code. That said, I saw another answer or comment that mentioned cross, which could be very useful here (see the sketch below). Another command worth checking out is joinby. But even with either of these methods, you will have to split your data into 7 different sets based on the variables you want to 'cross combine'.
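For reference, a sketch of the cross approach (it assumes two datasets, vals1.dta and vals2.dta, each holding the values of one variable):
use vals1, clear   // dataset containing var1
cross using vals2  // forms every pairwise combination with var2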
Anyway, Good Luck if you haven't yet found your solution.
If you just want the combinations of those 7 variables, you can do it like this:
keep v1 v2 v3 v4 v5 v6 v7
duplicates drop
list
Then you will get the list of unique combinations of those 7 variables. You can save the file under a different name from the original dataset. Please make sure that you do not save over the dataset directly; otherwise you will lose your original data.

Haskell operator vs function precedence

I am trying to verify something for myself about operator and function precedence in Haskell. For instance, the following code
list = map foo $ xs
can be rewritten as
list = (map foo) $ (xs)
and will eventually be
list = map foo xs
My question used to be, why the first formulation would not be rewritten as
list = (map foo $) xs
since function precedence is always higher than operator precedence, but I think I have found the answer: operators are simply not allowed to be arguments of functions (except, of course, if you surround them with parentheses). Is this right? If so, I find it odd that there is no mention of this mechanic/rule in RWH or Learn You a Haskell, or any of the other places I have searched. So if you know a place where the rule is stated, please link to it.
-- edit: Thank you for your quick answers. I think my confusion came from thinking that an operator literal would somehow evaluate to something that could get consumed by a function as an argument. It helped me to remember that an infix operator can be mechanically translated to a prefix function. Doing this to the first formulation yields
($) (map foo) (xs)
where there is no doubt that ($) is the consuming function, and since the two formulations are equivalent, then the $ literal in the first formulation cannot be consumed by map.
Firstly, application (whitespace) is the highest precedence "operator".
Secondly, in Haskell, there's really no distinction between operators and functions, other than that operators are infix by default, while functions aren't. You can convert functions to infix with backticks
2 `f` x
and convert operators to prefix with parens:
(+) 2 3
So, your question is a bit confused.
Now, specific functions and operators will have declared precedence, which you can find in GHCi with ":info":
Prelude> :info ($)
($) :: (a -> b) -> a -> b -- Defined in GHC.Base
infixr 0 $
Prelude> :info (+)
class (Eq a, Show a) => Num a where
(+) :: a -> a -> a
infixl 6 +
Showing both precedence and associativity.
You are correct. This rule is part of the Haskell syntax defined by the Haskell Report. In particular note in Section 3, Expressions, that the argument to function application (an fexp) must be an aexp. An aexp allows operators as part of sections, and also within a parenthesized expression, but not bare operators.
In map foo $ xs, the Haskell syntax means that this is parsed as the binary operator $ applied to the two expressions map foo and xs. As sepp2k notes, the syntax (map foo $) is a left section and has a different meaning.
I have to confess I've never thought much about this and actually had to look it up in the Report to see why operators have the behavior they do.
In addition to the information provided by other answers already, note that different operators can have different precedences over other operators, as well as being left-/right- or non-associative.
You can find these properties for the Prelude operators in the Haskell 98 Report fixity section.
+--------+----------------------+-----------------------+-------------------+
| Prec- | Left associative | Non-associative | Right associative |
| edence | operators | operators | operators |
+--------+----------------------+-----------------------+-------------------+
| 9 | !! | | . |
| 8 | | | ^, ^^, ** |
| 7 | *, /, `div`, | | |
| | `mod`, `rem`, `quot` | | |
| 6 | +, - | | |
| 5 | | | :, ++ |
| 4 | | ==, /=, <, <=, >, >=, | |
| | | `elem`, `notElem` | |
| 3 | | | && |
| 2 | | | || |
| 1 | >>, >>= | | |
| 0 | | | $, $!, `seq` |
+--------+----------------------+-----------------------+-------------------+
Any operator lacking a fixity declaration is assumed to be left associative with precedence 9.
Remember, function application has highest precedence (think of precedence 10 compared to the other precedences in the table) [1].
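As a side note, you can give operators of your own a fixity declaration; a small sketch (the +++ operator here is just an illustration):
infixr 5 +++
(+++) :: [a] -> [a] -> [a]
(+++) = (++)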
The difference is that infix operators get placed between their arguments, so this
list = map foo $ xs
can be rewritten in prefix form as
list = ($) (map foo) xs
which, by the definition of the $ operator, is simply
list = (map foo) xs
Operators can be passed as function arguments if you surround them with parentheses (e.g. map foo ($) xs, which would indeed be parsed as (map foo ($)) xs). However, if you do not surround them with parentheses, you are correct that they cannot be passed as arguments (or assigned to variables).
Also note that the syntax (someValue $) (where $ could be any operator) actually means something different: it is equivalent to \x -> someValue $ x, i.e. it partially applies the operator to its left operand (which in the case of $ is a no-op, of course). Likewise ($ x) partially applies the operator to the right operand. So map ($ x) [f, g, h] would evaluate to [f x, g x, h x].
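A small runnable sketch of those section examples (the functions are arbitrary illustrations):
main :: IO ()
main = do
  print (map ($ 10) [(+ 1), (* 2), negate])  -- applies each function to 10: [11,20,-10]
  print (map (10 -) [1, 2, 3])               -- left section of (-): [9,8,7]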

What data structures can efficiently store 2-d "grid" data?

I am trying to write an application that performs operations on a grid of numbers, where each time a function runs the value of each cell is changed, and the value of each cell is dependent on its neighbours. The value of each cell would be a simple integer.
What would be the best way of storing my data here? I've considered a flat list/array structure, but that seems clumsy, as I have to repeatedly do calculations to work out which cell is 'above' the current cell (for an arbitrary grid size), and nested lists, which don't seem to be a very good way of representing the data.
I can't help but feel there must be a better way of representing this data in memory for this sort of purpose. Any ideas?
(note, I don't think this is really a subjective question - but stack overflow seems to think it is.. I'm kinda hoping there's an accepted way this sort of data is stored)
Here are a few approaches. I'll (try to) illustrate these examples with a representation of a 3x3 grid.
The flat array
+---+---+---+---+---+---+---+---+---+
| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
+---+---+---+---+---+---+---+---+---+
a[row*width + column]
To access elements on the left or right, subtract or add 1 (take care at the row boundaries). To access elements above or below, subtract or add the row size (in this case 3).
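As a sketch of that arithmetic (C#, with a, row, column, width and height assumed to be in scope; bounds checks reduced to comments):
int Index(int row, int column, int width) => row * width + column;
int above = a[Index(row - 1, column, width)];  // requires row > 0
int below = a[Index(row + 1, column, width)];  // requires row < height - 1
int left  = a[Index(row, column - 1, width)];  // requires column > 0
int right = a[Index(row, column + 1, width)];  // requires column < width - 1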
The two dimensional array (for languages such as C or FORTRAN that support this)
+-----+-----+-----+
| 0,0 | 0,1 | 0,2 |
+-----+-----+-----+
| 1,0 | 1,1 | 1,2 |
+-----+-----+-----+
| 2,0 | 2,1 | 2,2 |
+-----+-----+-----+
a[row,column]
a[row][column]
Accessing adjacent elements is just incrementing or decrementing either the row or column number. The compiler is still doing exactly the same arithmetic as in the flat array.
The array of arrays (e.g. Java)
+---+ +---+---+---+
| 0 |-->| 0 | 1 | 2 |
+---+ +---+---+---+
| 1 |-->| 0 | 1 | 2 |
+---+ +---+---+---+
| 2 |-->| 0 | 1 | 2 |
+---+ +---+---+---+
a[row][column]
In this method, there is a list of "row pointers" (represented on the left), each of which points to a new, independent array. Like the 2-d array, adjacent elements are accessed by adjusting the appropriate index.
Fully linked cells (2-d doubly linked list)
+---+ +---+ +---+
| 0 |-->| 1 |-->| 2 |
| |<--| |<--| |
+---+ +---+ +---+
^ | ^ | ^ |
| v | v | v
+---+ +---+ +---+
| 3 |-->| 4 |-->| 5 |
| |<--| |<--| |
+---+ +---+ +---+
^ | ^ | ^ |
| v | v | v
+---+ +---+ +---+
| 6 |-->| 7 |-->| 8 |
| |<--| |<--| |
+---+ +---+ +---+
This method has each cell containing up to four pointers to its adjacent elements. Access to adjacent elements is through the appropriate pointer. You will still need to keep a structure of pointers to elements (probably using one of the above methods) to avoid having to step through each linked list sequentially. This method is a bit unwieldy; however, it does have an important application in Knuth's Dancing Links algorithm, where the links are modified during execution of the algorithm to skip over "blank" space in the grid.
If lookup time is important to you, then a 2-dimensional array might be your best choice since looking up a cell's neighbours is a constant time operation given the (x,y) coordinates of the cell.
Further to my comment, you may find the Hashlife algorithm interesting.
Essentially (if I understand it correctly), you store your data in a quad-tree with a hash table pointing to nodes of the tree. The idea here is that the same pattern may occur more than once in your grid, and each copy will hash to the same value, thus you only have to compute it once.
This is true for Life, which is a grid of mostly-false booleans. Whether it's true for your problem, I don't know.
A dynamically allocated array of arrays makes it trivial to point to the cell above the current cell, and supports arbitrary grid sizes as well.
You should abstract away from how you store your data.
If you need to do relative operations inside the array, a Slice is the common pattern for it.
You could have something like this:
using System.Diagnostics;

public interface IArray2D<T>
{
    T this[int x, int y] { get; }
}

public class Array2D<T> : IArray2D<T>
{
    readonly T[] _values;
    public readonly int Width;
    public readonly int Height;

    public Array2D(int width, int height)
    {
        Width = width;
        Height = height;
        _values = new T[width * height];
    }

    public T this[int x, int y]
    {
        get
        {
            Debug.Assert(x >= 0);
            Debug.Assert(x < Width);
            Debug.Assert(y >= 0);
            Debug.Assert(y < Height);
            return _values[y * Width + x];
        }
    }

    public Slice<T> Slice(int x0, int y0)
    {
        return new Slice<T>(this, x0, y0);
    }
}

public class Slice<T> : IArray2D<T>
{
    readonly IArray2D<T> _underlying;
    readonly int _x0;
    readonly int _y0;

    public Slice(IArray2D<T> underlying, int x0, int y0)
    {
        _underlying = underlying;
        _x0 = x0;
        _y0 = y0;
    }

    public T this[int x, int y]
    {
        get { return _underlying[_x0 + x, _y0 + y]; }
    }
}
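A quick usage sketch of the classes above (the sizes and offsets are hypothetical):
var grid = new Array2D<int>(10, 10);
var window = grid.Slice(3, 3);  // a view whose (0, 0) is the grid's (3, 3)
int centre = window[1, 1];      // reads grid[4, 4]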

De Morgan's rules explained

Could you please explain De Morgan's rules as simply as possible (e.g. to someone with only a secondary school mathematics background)?
Overview of boolean algebra
We have two values: T and F.
We can combine these values in three ways: NOT, AND, and OR.
NOT
NOT is the simplest:
NOT T = F
NOT F = T
We can write this as a truth table:
when given.. | results in...
============================
T | F
F | T
For conciseness
x | NOT x
=========
T | F
F | T
Think of NOT as the complement, that is, it turns one value into the other.
AND
AND works on two values:
x y | x AND y
=============
T T | T
T F | F
F T | F
F F | F
AND is T only when both its arguments (the values of x and y in the truth table) are T — and F otherwise.
OR
OR is T when at least one of its arguments is T:
x y | x OR y
=============
T T | T
T F | T
F T | T
F F | F
Combining
We can define more complex combinations. For example, we can write a truth table for x AND (y OR z), and the first row is below.
x y z | x AND (y OR z)
======================
T T T | ?
Once we know how to evaluate x AND (y OR z), we can fill in the rest of the table.
To evaluate the combination, evaluate the pieces and work up from there. The parentheses show which parts to evaluate first. What you know from ordinary arithmetic will help you work it out. Say you have 10 - (3 + 5). First you evaluate the part in parentheses to get
10 - 8
and evaluate that as usual to get the answer, 2.
Evaluating these expressions works similarly. We know how OR works from above, so we can expand our table a little:
x y z | y OR z | x AND (y OR z)
===============================
T T T | T | ?
Now it's almost like we're back to the x AND y table. We simply substitute the value of y OR z and evaluate. In the first row, we have
T AND (T OR T)
which is the same as
T AND T
which is simply T.
We repeat the same process for all 8 possible values of x, y, and z (2 possible values of x times 2 possible values of y times 2 possible values of z) to get
x y z | y OR z | x AND (y OR z)
===============================
T T T | T | T
T T F | T | T
T F T | T | T
T F F | F | F
F T T | T | F
F T F | T | F
F F T | T | F
F F F | F | F
Some expressions may be more complex than they need to be. For example,
x | NOT (NOT x)
===============
T | T
F | F
In other words, NOT (NOT x) is equivalent to just x.
DeMorgan's rules
DeMorgan's rules are handy tricks that let us convert between equivalent expressions that fit certain patterns:
NOT (x AND y) = (NOT x) OR (NOT y)
NOT (x OR y) = (NOT x) AND (NOT y)
(You might think of this as how NOT distributes through simple AND and OR expressions.)
Your common sense probably already understands these rules! For example, think of the bit of folk wisdom that "you can't be in two places at once." We could make it fit the first part of the first rule:
NOT (here AND there)
Applying the rule, that's another way of saying "you're not here or you're not there."
Exercise: How might you express the second rule in plain English?
For the first rule, let's look at the truth table for the expression on the left side of the equals sign.
x y | x AND y | NOT (x AND y)
=============================
T T | T | F
T F | F | T
F T | F | T
F F | F | T
Now the righthand side:
x y | NOT x | NOT y | (NOT x) OR (NOT y)
========================================
T T | F | F | F
T F | F | T | T
F T | T | F | T
F F | T | T | T
The final values are the same in both tables. This proves that the expressions are equivalent.
Exercise: Prove that the expressions NOT (x OR y) and (NOT x) AND (NOT y) are equivalent.
Looking over some of the answers, I think I can explain it better by using conditions that are actually related to each other.
DeMorgan's Law refers to the fact that there are two equivalent ways to write any combination of two conditions: specifically, the AND combination (both conditions must be true), and the OR combination (either one can be true). Examples are:
Part 1 of DeMorgan's Law
Statement: Alice has a sibling.
Conditions: Alice has a brother OR Alice has a sister.
Opposite: Alice is an only child (does NOT have a sibling).
Conditions: Alice does NOT have a brother, AND she does NOT have a sister.
In other words: NOT [A OR B] = [NOT A] AND [NOT B]
Part 2 of DeMorgan's Law
Statement: Bob is a car driver.
Conditions: Bob has a car AND Bob has a license.
Opposite: Bob is NOT a car driver.
Conditions: Bob does NOT have a car, OR Bob does NOT have a license.
In other words: NOT [A AND B] = [NOT A] OR [NOT B].
I think this would be a little less confusing to a 12-year-old. It's certainly less confusing than all this nonsense about truth tables (even I'm getting confused looking at all of those).
It is just a way to restate truth statements, which can provide simpler ways of writing conditionals to do the same thing.
In plain English:
When something is not (this or that), it is also not this and not that.
When something is not (this and that), it is not this or not that.
Note: Given the imprecision of the English language on the word 'or' I am using it to mean a non-exclusive or in the preceding example.
For example the following pseudo-code is equivalent:
If NOT(A OR B)...
IF (NOT A) AND (NOT B)....
IF NOT(A AND B)...
IF NOT(A) OR NOT(B)...
If you're a police officer looking for underage drinkers, you can do one of the following, and De Morgan's law says they amount to the same thing:
FORMULATION 1 (A AND B)
If they're under the age
limit AND drinking an alcoholic
beverage, arrest them.
FORMULATION 2 (NOT(NOT A OR NOT B))
If they're over
the age limit OR drinking a
non-alcoholic beverage, let them go.
This, by the way, isn't my example. As far as I know, it was part of a scientific experiment where the same rule was expressed in different ways to find out how much of a difference it made in peoples' ability to understand them.
"He doesn't have either a car or a bus." means the same thing as "He doesn't have a car, and he doesn't have a bus."
"He doesn't have a car and a bus." means the same thing as "He either doesn't have a car, or doesn't have a bus, I'm not sure which, maybe he has neither."
Of course, in plain english "He doesn't have a car and a bus." has a strong implication that he has at least one of those two things. But, strictly speaking, from a logic standpoint the statement is also true if he doesn't have either of them.
Formally:
not (car or bus) = (not car) and (not bus)
not (car and bus) = (not car) or (not bus)
In English, 'or' tends to mean a choice, implying that you don't have both things. In logic, 'or' always means the same as 'and/or' in English.
Here's a truth table that shows how this works:
First case: not (car or bus) = (not car) and (not bus)
c | b || c or b | not (c or b) || (not c) | (not b) | (not c) and (not b)
---+---++--------+--------------++---------+---------+--------------------
T | T || T | F || F | F | F
---+---++--------+--------------++---------+---------+--------------------
T | F || T | F || F | T | F
---+---++--------+--------------++---------+---------+--------------------
F | T || T | F || T | F | F
---+---++--------+--------------++---------+---------+--------------------
F | F || F | T || T | T | T
---+---++--------+--------------++---------+---------+--------------------
Second case: not (car and bus) = (not car) or (not bus)
c | b || c and b | not (c and b) || (not c) | (not b) | (not c) or (not b)
---+---++---------+---------------++---------+---------+--------------------
T | T || T | F || F | F | F
---+---++---------+---------------++---------+---------+--------------------
T | F || F | T || F | T | T
---+---++---------+---------------++---------+---------+--------------------
F | T || F | T || T | F | T
---+---++---------+---------------++---------+---------+--------------------
F | F || F | T || T | T | T
---+---++---------+---------------++---------+---------+--------------------
Draw a simple Venn diagram, two intersecting circles. Put A in the left and B in the right. Now (A and B) is obviously the intersecting bit. So NOT(A and B) is everything that's not in the intersecting bit, the rest of both circles. Colour that in.
Draw another two circles like before, A and B, intersecting. Now NOT(A) is everything that's in the right circle (B) but not the intersection, because the intersection is obviously A as well as B (we're taking the two circles as the whole universe here). Colour this in. Similarly NOT(B) is everything in the left circle but not the intersection, because the intersection is B as well as A. Colour this in.
The two drawings look the same. You've proved that NOT(A and B) = NOT(A) or NOT(B). T'other case is left as an exercise for the student.
DeMorgan's Law allows you to state a string of logical operations in different ways. It applies to logic and set theory, where in set theory you use complement for not, intersection for and, and union for or.
DeMorgan's Law allows you to simplify a logical expression, performing an operation that is rather similar to the distributive property of multiplication.
So, if you have the following in a C-like language
if !(x || y || z) { /* do something */ }
It is logically equivalent to:
if (!x && !y && !z)
It also works like so:
if !(x && !y && z)
turns into
if (!x || y || !z)
And you can, of course, go in reverse.
The equivalence of these statements is easy to see using something called a truth table. In a truth table, you simply lay out your variables (x, y, z) and list all the combinations of inputs for these variables. You then have columns for each predicate, or logical expression, and you determine for the given inputs, the value of the expression. Any university curriculum for computer science, computer engineering, or electrical engineering will likely drive you bonkers with the number and size of truth tables you must construct.
So why learn them? I think the biggest reason in computing is that they can improve the readability of larger logical expressions. Some people don't like using logical not (!) in front of expressions, as they think it can confuse someone who misses it. Applying DeMorgan's Law is also useful at the gate level of chip design, because certain gate types are faster or cheaper, or you may already be using a whole integrated circuit for them, so rewriting an expression can reduce the number of chip packages required for the outcome.
Not sure why I've retained this all these years, but it has proven useful on a number of occasions. Thanks to Mr Bailey, my grade 10 math teacher. He called it deMorgan's Theorem.
!(A || B) <==> (!A && !B)
!(A && B) <==> (!A || !B)
When you move the negation in or out of the brackets, the logical operator (AND, OR) changes.
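Both rules are easy to machine-check by brute force over all inputs; here is a small C++ sketch:
#include <cassert>

int main() {
    for (bool a : {false, true})
        for (bool b : {false, true}) {
            assert(!(a || b) == (!a && !b));  // first rule
            assert(!(a && b) == (!a || !b));  // second rule
        }
    return 0;
}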