Transform an expression using only NAND logic gates - boolean-logic

How do I transform the expression NOT(a) OR (NOT(b) AND NOT(c)) using only NAND gates? I have been trying to do it, but I can't find the correct answer.

Firstly, all logic functions can be implemented with NAND gates. Consider a NOT: just tie the two inputs of a NAND together and you have a NOT. A NAND followed by a NOT is an AND, and an OR is two NOT gates driving a NAND. It is all quite cool.
Do a bit of research on mathematical logic and you should find details of various conversion techniques.

You can apply the following formulas step by step:
NOT(A) OR NOT(B) = A NAND B
A AND B = NOT(A NAND B)
NOT(A) = A NAND A
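
For example, here is a small Python sketch (the helper names nand, not_, and_ and or_ are my own, not from the question) showing how those three identities build NOT, AND and OR out of a single NAND primitive:

def nand(a, b):
    # The only primitive gate we allow ourselves.
    return 0 if (a and b) else 1

def not_(a):
    # NOT(A) = A NAND A
    return nand(a, a)

def and_(a, b):
    # A AND B = NOT(A NAND B)
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    # NOT(A) OR NOT(B) = A NAND B, so A OR B = NOT(A) NAND NOT(B)
    return nand(nand(a, a), nand(b, b))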

I would suggest using Rott's grids. A Rott's grid is a graphical application of De Morgan's laws, and it is useful for optimizing a design so that it uses only combinations of NOR and NAND gates. Usually the conversion can be done very quickly without worrying about making a mistake.
Every Rott's grid is created according to these three principles:
De Morgan's laws are respected by switching between ⋅ (conjunction) and + (disjunction) and separating the levels with horizontal lines (negations),
vertical lines separate the individual inputs, determining the number of inputs of each logic gate,
on the last line the input variables are placed either in their plain or negated form, which is determined individually by the number of horizontal lines above them (the initial ¬ also counts).
This is the given expression in a matching Rott's grid:
f = ¬a + ¬b ⋅ ¬c
    ------------
      ⋅      +
         | ---------
         |    ⋅
         |       |
      a  |  ¬b   |  ¬c
As you can see, the original expression was transformed into an equivalent one that uses only two 2-input NAND gates (plus some inverters, which can themselves be replaced by NAND gates). The grid is just a graphical representation of applying De Morgan's laws to the original expression:
f = ¬a + (¬b ⋅ ¬c)
  = ¬(¬(¬a + (¬b ⋅ ¬c)))   // double negation law: ¬(¬x) = x
  = ¬(¬¬a ⋅ ¬(¬b ⋅ ¬c))    // De Morgan's law
  = ¬(a ⋅ ¬(¬b ⋅ ¬c))      // double negation law: ¬¬a = a

f = nand(a, nand(not(b), not(c)))
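
As a quick sanity check, a few lines of Python (my own sketch, not from the original answer) confirm that the final NAND form agrees with f = ¬a + ¬b⋅¬c for all eight input combinations:

def nand(a, b):
    return 0 if (a and b) else 1

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            original = (not a) or ((not b) and (not c))
            # not(b) and not(c) realised as nand(b, b) and nand(c, c)
            nand_form = nand(a, nand(nand(b, b), nand(c, c)))
            assert bool(nand_form) == bool(original)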

Related

Writing an expression using only NAND, OR, XNOR

I have a 2-1 mux and I'm trying to write z = s'd0 + sd1
using only NAND, XNOR, and OR gates (not necessarily all of them).
I tried simplifying it and what I ended up with is z = NAND(NAND(s', d0), NAND(s, d1)), but I can't use NOT ('), so is there a way to write NAND(s', d0) without the NOT?
You can build NOT from NAND:
NAND(X,X) == NOT(X)
The NAND gate is a universal gate; you can use it to make any other gate.
s' = nand(s,s)
Simple solution
The full version of the solution proposed by others is (A NAND S) NAND (B NAND (S NAND S)).
By the way, NOT X could also be expressed as X NAND 1, not only as X NAND X.
Advanced solution
(S OR (A XNOR B)) XNOR A
The latter solution is definitely more interesting:
It uses fewer gates (though of two different types).
It uses a set of gates that is not functionally complete (and is therefore less trivial).
How to find the latter solution?
Construct the Zhegalkin polynomial of a 2:1 mux and simplify it slightly: (S AND (A XOR B)) XOR B.
Note that the boolean function dual to a 2:1 mux is also a 2:1 mux, but with swapped input signals.
Now "dualize" the polynomial (replace AND and XOR with OR and XNOR respectively) and swap A with B.

Synthesizing Shift left by 1 with just NAND gates?

I have an algorithm that performs division of 2 64-bit unsigned integers using C bitwise operators (<<, &, ^, |, ~) in a loop.
Now I would like to eliminate the shift-left operator << completely, to understand how this is done, since I can already synthesize AND, OR, XOR and NOT using NAND gates.
Is it possible to perform left shift by 1 with JUST NAND gates too? I have read a little about flip-flops in electronics but I'm implementing this in pure software just to understand it.
I want to avoid using << or >> operators completely and do not want to use existing arithmetic operators from any computer language including assembly.
In hardware, you can implement a left shift by 1 without any logic gates at all: just wire each input bit line to the next-higher output bit position and tie the least-significant output bit to 0.
If you want something more generic, you could implement a barrel shifter. This can be synthesised from multiplexers, which in turn can be synthesised from NAND gates.
Here is a comprehensive master's thesis on the subject: Barrel Shifter Design, Optimization, and Analysis.
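
If you model the 64-bit word as a list of bits in software, the "rewiring" view translates directly: the shifted word just reads each output bit from the neighbouring input bit, with no gates and no << operator involved. A minimal Python sketch of my own (to_bits and from_bits are only scaffolding for the demo and use plain arithmetic, not shifts):

WIDTH = 64

def to_bits(x):
    # Least-significant bit first; built with arithmetic only, no shifts.
    bits = []
    for _ in range(WIDTH):
        bits.append(x % 2)
        x //= 2
    return bits

def from_bits(bits):
    value, weight = 0, 1
    for bit in bits:
        value += bit * weight
        weight *= 2
    return value

def shift_left_1(bits):
    # "Rewiring": output bit i+1 is input bit i, output bit 0 is constant 0.
    # The top bit falls off, as in a 64-bit logical left shift.
    return [0] + bits[:WIDTH - 1]

assert from_bits(shift_left_1(to_bits(37))) == 74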

Why does Feynman refer to this reversible gate as a NAND?

In chapter 2 of The Pleasure of Finding Things Out, Richard P. Feynman discusses the physical limitations associated with building computers of very small size. He introduces the concept of reversible logic gates:
The great discovery of Bennett and, independently, of Fredkin is that it is possible to do computation with a different kind of fundamental gate unit, namely, a reversible gate unit. I have illustrated their idea---with a unit which I could call a reversible NAND gate.
He goes on to describe the behaviour of what he calls a reversible NAND gate:
It has three inputs and three outputs. Of the outputs, two, A' and B', are the same as two of the inputs, A and B, but the third output works this way. C' is the same as C unless A and B are both 1, in which case it changes whatever C is. For instance, if C is 1 it is changed to 0, if C is 0 it is changed to 1---but these changes only happen if both A and B are 1.
The book also contains a diagram of that reversible gate.
The truth table of a classical, irreversible NAND gate (inputs: A, B; output: C) is as follows:
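A B | C
0 0 | 1
0 1 | 1
1 0 | 1
1 1 | 0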
If I understand Feynman's description correctly, the truth table of what Feynman refers to as a reversible NAND gate should be as follows:
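A B C | A' B' C'
0 0 0 | 0  0  0
0 0 1 | 0  0  1
0 1 0 | 0  1  0
0 1 1 | 0  1  1
1 0 0 | 1  0  0
1 0 1 | 1  0  1
1 1 0 | 1  1  1
1 1 1 | 1  1  0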
However, I don't understand why Feynman is calling his gate a NAND. How is one supposed to derive the result of NAND(A,B) with his gate? It seems to me that NAND(A,B) cannot be read directly from any of the three outputs (A', B', C'). NAND(A,B) is given by XNOR(C,C'), but that would require an additional XNOR gate. So why is Feynman calling his gate a NAND gate?
As others have mentioned, when input C = 1, then output C' = A NAND B.
The key, however, is that no information is destroyed: given the outputs, the inputs can be determined. Contrast this with a normal NAND gate, where the inputs cannot be determined given only the output; information is destroyed. With the reversible NAND gate, no information is lost. The absence of information loss is significant because, in theory, it means there need be no energy loss.
If you leave C set, then C' is A NAND B.
(I haven't read the paper, but I'm guessing that C is what makes the gate "reversible" - with C set it's a NAND gate, and with C clear it's an AND gate.)
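
A small Python sketch (my own illustration, not from the book) makes both points concrete: with C held at 1 the third output computes NAND(A, B), and applying the gate twice returns the original inputs, so no information is lost:

def reversible_nand(a, b, c):
    # Feynman's gate: A and B pass through; C flips only when A = B = 1.
    # (This is the gate usually known as a Toffoli or CCNOT gate.)
    return a, b, c ^ (a & b)

for a in (0, 1):
    for b in (0, 1):
        # With C tied to 1, the third output is NAND(A, B).
        assert reversible_nand(a, b, 1)[2] == (0 if a and b else 1)
        for c in (0, 1):
            # The gate is its own inverse, so the inputs are recoverable.
            assert reversible_nand(*reversible_nand(a, b, c)) == (a, b, c)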

Why do different operators have different associativity?

I've got to the section on operators in The Ruby Programming Language, and it's made me think about operator associativity. This isn't a Ruby question by the way - it applies to all languages.
I know that operators have to associate one way or the other, and I can see why in some cases one way would be preferable to the other, but I'm struggling to see the bigger picture. Are there some criteria that language designers use to decide what should be left-to-right and what should be right-to-left? Are there some cases where it "just makes sense" for it to be one way over the others, and other cases where it's just an arbitrary decision? Or is there some grand design behind all of this?
Typically it's so the syntax is "natural":
Consider x - y + z. You want that to be left-to-right, so that you get (x - y) + z rather than x - (y + z).
Consider a = b = c. You want that to be right-to-left, so that you get a = (b = c), rather than (a = b) = c.
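
For instance, a couple of throwaway Python lines (the principle is language-independent):

x, y, z = 10, 4, 2
print(x - y + z)    # 8: '-' and '+' associate left-to-right, i.e. (x - y) + z
print(x - (y + z))  # 4: the right-to-left grouping would give a different result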
I can't think of an example of where the choice appears to have been made "arbitrarily".
Disclaimer: I don't know Ruby, so my examples above are based on C syntax. But I'm sure the same principles apply in Ruby.
Imagine writing everything with brackets for a century or two.
You will develop a feel for which operator most often binds its values together first, and which binds last.
If you can then define the associativity of those operators, you want to define it in a way that minimizes the brackets while keeping formulas easy to read. That is why (*) binds before (+), and why (-) should be left-associative.
By the way, left-/right-associative means the same as left-/right-recursive. The word "associative" is the mathematical perspective, "recursive" the algorithmic one. (See "tail-recursive", and look at where you write the most brackets.)
Most operator associativities in computer science are nicked directly from maths, specifically symbolic logic and algebra.

What are Boolean Networks?

I was reading this SO question and I got intrigued by Boolean Networks. I've looked them up on Wikipedia, but the explanation is too vague IMO. Can anyone explain to me what Boolean Networks are? And if possible, with some examples too?
Boolean networks represent a class of networks where the nodes have states and the edges represent transitions between states. In the simplest case, these states are either 1 or 0 – i.e. boolean.
Transitions may be simple activations or inactivations. For example, consider nodes a and b with an edge from a to b.
      f
a ------> b
Here, f is a transition function. In the case of activation, f may be defined as:
f(x) = x
i.e. b's value is 1 if and only if a's value is 1. Conversely, an inactivation (or repression) might look like this:
f(x) = NOT x
More complex networks use more involved boolean functions. E.g. consider:
a       b
 \     /
  \   /
   \ /
    v
    c
Here, we've got edges from a to c and from b to c. c might be defined in terms of a and b as follows.
f(a, b) = a AND NOT b
Thus, c is activated only if a is active and b is inactive at the same time.
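
Here is a minimal Python sketch of that little network (the names rules and step are mine, and the synchronous update scheme, where every node is recomputed from the same snapshot, is just one common convention):

# Each node's next state is a boolean function of the current states.
rules = {
    "a": lambda s: s["a"],                   # no incoming edge: keeps its state
    "b": lambda s: s["b"],                   # no incoming edge: keeps its state
    "c": lambda s: s["a"] and not s["b"],    # c = a AND NOT b
}

def step(state):
    # Synchronous update: all nodes are updated from the same snapshot.
    return {node: int(rule(state)) for node, rule in rules.items()}

state = {"a": 1, "b": 0, "c": 0}
print(step(state))  # {'a': 1, 'b': 0, 'c': 1} -- c switches on because a=1 and b=0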
Such networks can be used to model all kinds of relations. One that I know of is in systems biology where they are used to model (huge) interaction networks of chemicals in living cells. These networks effectively model how certain aspects of the cells work and they can be used to find deficiencies, points of attack for drugs and similarities between unrelated components that point to functional equivalence. This is fundamentally important in understanding how life works.