Why does Feynman refer to this reversible gate as a NAND?

In chapter 2 of The Pleasure of Finding Things Out, Richard P. Feynman discusses the physical limitations associated with building computers of very small size. He introduces the concept of reversible logic gates:
The great discovery of Bennett and, independently, of Fredkin is that it is possible to do computation with a different kind of fundamental gate unit, namely, a reversible gate unit. I have illustrated their idea---with a unit which I could call a reversible NAND gate.
He goes on to describe the behaviour of what he calls a reversible NAND gate:
It has three inputs and three outputs. Of the outputs, two, A' and B', are the same as two of the inputs, A and B, but the third output works this way. C' is the same as C unless A and B are both 1, in which case it changes whatever C is. For instance, if C is 1 it is changed to 0, if C is 0 it is changed to 1---but these changes only happen if both A and B are 1.
The book contains a diagram of that reversible gate, which I've attached below.
The truth table of a classical, irreversible NAND gate (inputs: A, B; output: C) is as follows:

A B | C
0 0 | 1
0 1 | 1
1 0 | 1
1 1 | 0
If I understand Feynman's description correctly, the truth table of what Feynman refers to as a reversible NAND gate should be as follows:

A B C | A' B' C'
0 0 0 | 0  0  0
0 0 1 | 0  0  1
0 1 0 | 0  1  0
0 1 1 | 0  1  1
1 0 0 | 1  0  0
1 0 1 | 1  0  1
1 1 0 | 1  1  1
1 1 1 | 1  1  0
However, I don't understand why Feynman calls his gate a NAND. How is one supposed to derive the result of NAND(A,B) with his gate? It seems to me that NAND(A,B) cannot be read directly off any of the three outputs (A', B', C'). Since C' = XOR(C, AND(A,B)), we have XOR(C,C') = AND(A,B), so NAND(A,B) would be NOT(XOR(C,C')), but that would require additional gates. So why is Feynman calling his gate a NAND gate?

As others have mentioned, when the input C = 1, the output C' = A NAND B.
The key, however, is that no information is destroyed: given the outputs, the inputs can be determined. Contrast this with a normal NAND gate, where the inputs cannot be determined given only the output; information is destroyed. With the reversible NAND gate, no information is lost. This matters because, in theory, a computation that loses no information need not lose any energy.

If you leave C set, then C' is A NAND B.
(I haven't read the paper, but I'm guessing that C is what makes the gate "reversible" - with C set it's a NAND gate, and with C clear it's an AND gate.)
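For concreteness, here is a minimal Haskell sketch of the gate Feynman describes (it is what is nowadays called the Toffoli, or controlled-controlled-NOT, gate; the function names are my own):

-- A and B pass through unchanged; C is flipped iff A AND B is 1.
toffoli :: (Bool, Bool, Bool) -> (Bool, Bool, Bool)
toffoli (a, b, c) = (a, b, c /= (a && b))   -- c' = c XOR (a AND b)

-- Fixing the input C to 1 makes the third output A NAND B.
nandViaToffoli :: Bool -> Bool -> Bool
nandViaToffoli a b = let (_, _, c') = toffoli (a, b, True) in c'

Applying toffoli twice returns the original inputs (the gate is its own inverse), which is exactly the reversibility discussed above, and evaluating nandViaToffoli on all four input pairs reproduces the classical NAND truth table.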

Universal Quantification in Isabelle/HOL

It has come to my attention that there are several ways to deal with universal quantification when working with Isabelle/HOL Isar. I am trying to write some proofs in a style that is suitable for undergraduate students to understand and reproduce (that's why I'm using Isar!), and I am confused about how to express universal quantification in a nice way.
In Coq, for example, I can write forall x, P(x) and then say "induction x", and that will automatically generate goals according to the corresponding induction principle. However, in Isabelle/HOL Isar, if I want to directly apply an induction principle, I must state the theorem without any quantification, like this:
lemma foo: "P(x)"
proof (induct x)
And this works fine, as x is then treated as a schematic variable, as if it were universally quantified. However, it lacks the universal quantification in the statement, which is not very educational. Another way I have found is to use ⋀ or ∀. However, I cannot directly apply the induction principle if I state the lemma this way; I have to first fix the universally quantified variables... which again seems inconvenient from an educational point of view:
lemma foo: "⋀x. P(x)"
proof -
fix x
show "P(x)"
proof (induct x)
What is a nice proof pattern for expressing universal quantification that does not require me to explicitly fix variables before induction?
You can use induct_tac, case_tac, etc. These are the legacy variants of the induct/induction and cases methods used in proper Isar. They can operate on bound, meta-universally-quantified variables in the goal state, like the x in your second example:
lemma foo: "⋀x. P(x :: nat)"
proof (induct_tac x)
One disadvantage of induct_tac compared to induction is that it does not provide cases, so you cannot just write case (Suc x) and then use from Suc.IH and show ?case in your proof. Another disadvantage is that addressing bound variables is, in general, rather fragile, since their names are often generated automatically by Isabelle and may change when Isabelle changes (though not in the case you have shown above, of course).
This is one of the reasons why Isar proofs are preferred these days. I would strongly advise against showing your students "bad" Isabelle in the hope that it will be easier for them to understand.
The facts are these: free variables in a theorem statement in Isabelle are logically equivalent to universally quantified variables, and Isabelle automatically converts them to schematic variables once you have proven the theorem. This convention is not unique to Isabelle; it is common in mathematics and logic, and it helps to reduce clutter. Isar in particular tries to avoid explicit use of the ⋀ operator in goal statements (i.e. have/show; it still appears in assume).
Or, in short: free variables in theorems are universally quantified by default. I doubt that students will find this hard to understand; I certainly did not when I started with Isabelle as a BSc student. In fact, I found it much more natural to state a theorem as xs @ (ys @ zs) = (xs @ ys) @ zs instead of ∀xs ys zs. xs @ (ys @ zs) = (xs @ ys) @ zs.

Synthesizing Shift left by 1 with just NAND gates?

I have an algorithm that performs division of 2 64-bit unsigned integers using C bitwise operators (<<, &, ^, |, ~) in a loop.
Now I would like to eliminate the left-shift operator (<<) completely, to understand how this is done, since I can already synthesize AND, OR, XOR, and NOT using NAND gates.
Is it possible to perform left shift by 1 with JUST NAND gates too? I have read a little about flip-flops in electronics but I'm implementing this in pure software just to understand it.
I want to avoid using << or >> operators completely and do not want to use existing arithmetic operators from any computer language including assembly.
In hardware, you can implement left shift by 1 without any logic gates at all. Just wire each output line to the next-lower input line, and tie the lowest output bit to constant 0:

in[63] --> (dropped)
in[62] --> out[63]
  ...
in[1]  --> out[2]
in[0]  --> out[1]
     0 --> out[0]
If you want something more generic, you could implement a barrel shifter. This can be synthesised from multiplexers, which in turn can be synthesised from NAND gates.
Here is a comprehensive master's thesis on the subject: Barrel Shifter Design, Optimization, and Analysis.
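Since the question is about doing this in pure software, here is a small Haskell sketch of both points (the names and the least-significant-bit-first ordering are my own choices): a fixed shift by 1 is mere rewiring, and the 2:1 multiplexer from which a barrel shifter is built can itself be made of four NANDs.

nand :: Bool -> Bool -> Bool
nand a b = not (a && b)

-- Inverter from a single NAND.
inv :: Bool -> Bool
inv a = nand a a

-- 2:1 multiplexer from four NANDs: output is x when s is 0, y when s is 1.
mux :: Bool -> Bool -> Bool -> Bool
mux s x y = nand (nand x (inv s)) (nand y s)

-- Fixed left shift by 1 needs no gates at all, only wiring:
-- output bit i is input bit i-1, and bit 0 is tied to constant 0.
-- (Bits are least significant first; the old top bit falls off the end.)
shl1 :: [Bool] -> [Bool]
shl1 bits = False : init bits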

What are the x values for this circuit truthtable? ABC (3) inputs (Homework)

I usually try not to ask for homework help, but once again I am stuck. I've been going over and over my textbook, but I am not able to figure this out. I emailed the instructor, and all the help I get is "check this page" and "check that page", so instead of just not doing it, I would like some advice so that I am actually able to learn.
The "G" gate thing, is whats bugging me in the book there is no gate that looks like that so i have no idea what to do. Here's a picture of the question, basically I have to find the X values (outputs). The answer would be nice. But I highly would appreciate a little explaination of "why".
Really appreciate the help!
OK, so you're not quite sure what this G gate means from looking at the diagram. In a question like this (where there's something that doesn't make sense to you), it's helpful to start with what you do know.
From looking at the diagram, I don't know what the G gate means either. In fact, I don't know anything about circuits (but I do know something about logic :) ). So I start with the truth table, for which the author has generously given the formulas for each gate. I notice that there are 3 operators (*, +, ') which I know.
If you know what those operators mean, then you can derive the meaning of the G gate.
D looks like an AND.
E looks like a NAND; the little bubble is an inverter.
F looks like a NOT: a buffer with an inverter on the output.
G looks like a NOR: an OR with an inverted output.
The unlabeled one looks like an OR.
That's a really bad drawing though.
Giving you the answer would only cheat you out of your education and this stuff is important. There are 16 logical connectives for binary functions and they're all . . . logical. They make sense.
AND means when both inputs are true the output will be true. "If A and B = 1 output is 1"
OR means if any of the inputs are true the output will be true. "if A or B = 1 output is 1"
NOT means if the input is true the output is false.
XOR means if either input is true, but not both, the output will be true. "If A or B = 1 output is 1 unless both A and B = 1"
AND, OR, and XOR can all have inverters on their outputs which reverses their meanings. When they're supposed to output true they'll output false and when they're supposed to output false they'll output true.
The headings in the table are using * to mean AND, + to mean OR, and ' to mean "invert the symbol on the left".
D is A AND B, so if A and B are true, then put a 1 in the column, the rest of the column is false.
E is B NAND C, so if B and C are true, then put a 0 in the column, the rest of the column is true.
F is NOT C, so put the opposite of C in the column.
G is NOT((A AND B) OR (NOT C)), or if you look at the schematic and think about the formulas a bit you'll see that it's NOT(D OR F). You should be able to figure this out on your own now.
X is G OR E. There's a more complicated formula for it that traces through the circuit like the formula for G, but if you need it to prove your work, you'll have to talk to your teacher. You'll probably get more help asking questions that show you put in effort.
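If you want to check your finished table mechanically, here is a short Haskell sketch that tabulates the formulas described above. Treat it as a way to verify your own work rather than a substitute for doing it.

-- Prints one row per input combination, using the gate formulas:
-- D = A*B, E = (B*C)', F = C', G = (D+F)', X = G+E.
main :: IO ()
main = mapM_ putStrLn
  [ row [a, b, c] [d, e, f, g, x]
  | a <- [False, True], b <- [False, True], c <- [False, True]
  , let d = a && b
        e = not (b && c)
        f = not c
        g = not (d || f)
        x = g || e ]
  where
    row ins outs = unwords (map bit ins) ++ " | " ++ unwords (map bit outs)
    bit v = if v then "1" else "0"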
I wrote an article about logical functions in JavaScript that includes schematics. If you memorize the function tables at the top of the article, it'll help you a lot when dealing with digital logic. Bonus points for associating the function number with the function name; that way you'll have memorized the output column of each function's truth table. They've taught you to count in binary, right? Anyway, here's the article: http://matthewkastor.blogspot.com/2013/10/logical-functions-in-javascript.html It's not so important for your immediate question but will definitely do you good to read it. Oh, inputs can be inverted as well, so don't let that throw you off.

Examples of monoids/semigroups in programming

It is well known that monoids are stunningly ubiquitous in programming. They are so ubiquitous and so useful that I, as a "hobby project", am working on a system that is entirely based on their properties (distributed data aggregation). To make the system useful I need useful monoids :)
I already know of these:
Numeric or matrix sum
Numeric or matrix product
Minimum or maximum under a total order with a top or bottom element (more generally, join or meet in a bounded lattice, or even more generally, product or coproduct in a category)
Set union
Map union where conflicting values are joined using a monoid (see the sketch in code after this list)
Intersection of subsets of a finite set (or just set intersection if we speak about semigroups)
Intersection of maps with a bounded key domain (same here)
Merge of sorted sequences, perhaps with joining key-equal values in a different monoid/semigroup
Bounded merge of sorted lists (same as above, but we take the top N of the result)
Cartesian product of two monoids or semigroups
List concatenation
Endomorphism composition.
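To make one of these concrete, here is a quick Haskell sketch of the map-union entry above (the wrapper name MonoidMap is mine):

import qualified Data.Map as M

-- Map union where values under colliding keys are combined
-- with their own semigroup/monoid operation.
newtype MonoidMap k v = MonoidMap (M.Map k v)

instance (Ord k, Semigroup v) => Semigroup (MonoidMap k v) where
  MonoidMap a <> MonoidMap b = MonoidMap (M.unionWith (<>) a b)

instance (Ord k, Semigroup v) => Monoid (MonoidMap k v) where
  mempty = MonoidMap M.empty

For example, with a numeric-sum monoid as v, this merges two maps of counters by adding the counts for shared keys.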
Now, let us define a quasi-property of an operation as a property that holds up to an equivalence relation. For example, list concatenation is quasi-commutative if we consider lists of equal length or with identical contents up to permutation to be equivalent.
Here are some quasi-monoids and quasi-commutative monoids and semigroups:
Any (a+b = a or b, if we consider all elements of the carrier set to be equivalent)
Any satisfying predicate (a+b = whichever of a and b is non-null and satisfies some predicate P, or null if neither does; quasi, if we consider all elements satisfying P equivalent); see the sketch after this list
Bounded mixture of random samples (xs+ys = a random sample of size N from the concatenation of xs and ys; if we consider any two samples with the same distribution as the whole dataset to be equivalent)
Bounded mixture of weighted random samples
Let's call it "topological merge": given two acyclic and non-contradicting dependency graphs, a graph that contains all the dependencies specified in both. For example, a list "concatenation" that may produce any permutation in which the elements of each list appear in order (say, 123+456=142356).
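As a sketch of the "any satisfying predicate" entry above (essentially Data.Monoid's First restricted to values satisfying a predicate; the names are mine):

-- Keeps the leftmost value that satisfied the predicate it was built with.
newtype FirstSat a = FirstSat (Maybe a)

firstSat :: (a -> Bool) -> a -> FirstSat a
firstSat p x = FirstSat (if p x then Just x else Nothing)

instance Semigroup (FirstSat a) where
  FirstSat (Just x) <> _ = FirstSat (Just x)
  FirstSat Nothing  <> y = y

instance Monoid (FirstSat a) where
  mempty = FirstSat Nothing

It is only quasi-commutative: a+b and b+a may keep different values, but both keep some value satisfying P whenever one exists.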
Which others do exist?
The quotient monoid is another way to form monoids (quasi-monoids?): given a monoid M and an equivalence relation ~ compatible with multiplication, the quotient M/~ is another monoid. For example:
finite multisets with union: if A* is the free monoid over A (lists with concatenation) and ~ is the "is a permutation of" relation, then A*/~ is the free commutative monoid.
finite sets with union: if ~ is further modified to disregard the count of elements (so "aa" ~ "a"), then A*/~ is the free commutative idempotent monoid.
syntactic monoid: any regular language gives rise to a syntactic monoid, the quotient of A* by the "indistinguishable by the language" relation. Here is a finger tree implementation of this idea. For example, the language {a^(3n) : n natural} has Z_3 as its syntactic monoid.
Quotient monoids automatically come with a surjective homomorphism M -> M/~.
A "dual" construction is the submonoid, which comes with an injective homomorphism N -> M.
Yet another construction on monoids is tensor product.
Monoids admit exponentiation by squaring in O(log n), as well as fast parallel prefix-sum computation. They are also used in the Writer monad.
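For illustration, exponentiation by squaring over an arbitrary monoid can be sketched in Haskell like this (essentially what stimes/mtimesDefault provide in the standard library):

-- Computes x <> x <> ... <> x (n copies) in O(log n) combines,
-- using only associativity and the identity element. Assumes n >= 0.
mpow :: Monoid m => Integer -> m -> m
mpow 0 _ = mempty
mpow n x
  | even n    = let h = mpow (n `div` 2) x in h <> h
  | otherwise = x <> mpow (n - 1) x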
The Haskell standard library is alternately praised and attacked for its use of the actual mathematical terms for its type classes. (In my opinion it's a good thing, since without it I'd never even have known what a monoid is!) In any case, you might check out http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data-Monoid.html for a few more examples:
the dual of any monoid is a monoid: given a+b, define a new operation ++ with a++b = b+a
conjunction and disjunction of booleans
over the Maybe monad (aka "option" in OCaml), first and last. That is:
first (Just a) b = Just a
first Nothing  b = b
and likewise for last.
The latter is just the tip of the iceberg of a whole family of monoids related to monads and arrows, but I can't really wrap my head around those (other than simple monadic endomorphisms). But a Google search on monads and monoids turns up quite a bit.
A really useful example of a commutative monoid is unification in logic and constraint languages. See section 2.8.2.2 of 'Concepts, Techniques and Models of Computer Programming' for a precise definition of a possible unification algorithm.
Good luck with your language! I'm doing something similar with a parallel language, using monoids to merge subresults from parallel computations.
Arbitrary length Roman numeral value computation.
https://gist.github.com/4542999

What are Boolean Networks?

I was reading this SO question and got intrigued by Boolean networks. I've looked them up on Wikipedia, but the explanation is too vague IMO. Can anyone explain to me what Boolean networks are? And if possible, with some examples too?
Boolean networks represent a class of networks where the nodes have states and the edges represent transitions between states. In the simplest case, these states are either 1 or 0 – i.e. boolean.
Transitions may be simple activations or inactivations. For example, consider nodes a and b with an edge from a to b.
f
a ------> b
Here, f is a transition function. In the case of activation, f may be defined as:
f(x) = x
i.e. b's value is 1 if and only if a's value is 1. Conversely, an inactivation (or repression) might look like this:
f(x) = NOT x
More complex networks use more involved boolean functions. E.g. consider:
a b
\ /
\ /
\ /
v
c
Here, we've got edges from a to c and from b to c. c might be defined in terms of a and b as follows.
f(a, b) = a AND NOT b
Thus, c is activated only if a is active and b is inactive at the same time.
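A synchronous update of such a network is easy to express in code. Here is a minimal Haskell sketch of the three-node example above (the representation is my own; a and b have no incoming edges here, so they simply keep their values):

import qualified Data.Map as M

type State = M.Map String Bool

-- One synchronous step: every node recomputes from the previous state.
step :: State -> State
step s = M.fromList
  [ ("a", get "a")                     -- no incoming edges: unchanged
  , ("b", get "b")                     -- no incoming edges: unchanged
  , ("c", get "a" && not (get "b")) ]  -- c = a AND NOT b
  where get n = M.findWithDefault False n s

Iterating step from an initial assignment traces the network's dynamics; states that step maps to themselves are its steady states.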
Such networks can be used to model all kinds of relations. One that I know of is in systems biology where they are used to model (huge) interaction networks of chemicals in living cells. These networks effectively model how certain aspects of the cells work and they can be used to find deficiencies, points of attack for drugs and similarities between unrelated components that point to functional equivalence. This is fundamentally important in understanding how life works.