Boolean Logic A'B + AB'

I have a fairly simple question that I cannot find an example of online. I understand that this can be simplified to A^B, but I have not yet covered that section. What is the correct value of the boolean expression (A'B + AB')?

Let's look at the truth table:
A  B   A'B  AB'  A'B + AB'
---------------------------
0  0    0    0       0
0  1    1    0       1
1  0    0    1       1
1  1    0    0       0
This simply computes the XOR of A and B, and hence XOR is our answer.

The definition of the XOR symbol (^) is a^b = a'b + ab', i.e. one or the other, but not both, must be true for the expression to be true. Therefore there are no intermediate steps to convert between the two expressions. This is because a'b and ab' are prime implicants of the Boolean function.

Another (not necessarily simpler) way to define XOR is (A+B).(A'+B'):
A  B   A+B  A'  B'  A'+B'  (A+B).(A'+B')
------------------------------------------
0  0    0   1   1     1          0
0  1    1   1   0     1          1
1  0    1   0   1     1          1
1  1    1   0   0     0          0
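If you want to convince yourself mechanically, a minimal Python sketch can evaluate all three forms over every input combination and confirm they agree:

from itertools import product

# Exhaustively compare A'B + AB', (A+B).(A'+B') and A XOR B.
for a, b in product([0, 1], repeat=2):
    sum_of_products = (1 - a) * b + a * (1 - b)      # A'B + AB'
    product_of_sums = (a | b) & ((1 - a) | (1 - b))  # (A+B).(A'+B')
    xor = a ^ b                                      # built-in XOR
    print(a, b, sum_of_products, product_of_sums, xor)
    assert sum_of_products == product_of_sums == xor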

Related

Good explanation on why x-1 "looks" the way it does in binary

Let's take the number 28 in binary:
0b11100 # 28
If we subtract 1 from the number it looks like this:
0b11011 # 27
The way I would explain how it 'looks' is that when you subtract 1 from a number, the right-most 1-bit is cleared to zero and all the zeros to the right of it are set to one. For example:
0b10101 - 1
= 0b10100
0b01000 - 1
= 0b00111
0b10000000 - 1
= 0b01111111
What would be the best explanation as to why this occurs, though? I'm sure it's a property of binary two's complement, but I'm trying to figure out the best way to explain this to myself so that I can gain a deeper understanding of it.
Binary numbers have the general form N = d_n×b^n + d_(n-1)×b^(n-1) + … + d_1×b^1 + d_0×b^0, where b is the base (2 for binary), each digit d is less than the base (0 or 1), and n is the position.
We write binary numbers without the base b (because we know it is always 2) and without the exponent n, which runs implicitly from 0 for the least significant (rightmost) digit, 1 for the next digit to the left, and so on.
For example, your number 28 is 1×2^4 + 1×2^3 + 1×2^2 + 0×2^1 + 0×2^0 = 16 + 8 + 4 + 0 + 0 = 28.
In binary:
1 - 1 = 0
0 - 1 = 1, and you borrow 1 from the next position to the left (the same as when you do 10 - 1 in decimal: 0 - 1 gives 9 and you borrow 1 from the tens place).
When subtracting 1 you start from the rightmost position; if there is a 0 there you turn it into a 1 and carry the borrow to the next (left) position, and that chains all the way left until you reach a position where you can subtract without affecting any higher position.
0b01000 - 1 can be written as 0×2^4 + 1×2^3 + 0×2^2 + 0×2^1 + 0×2^0 - 1×2^0. In plain decimal that is 8 - 1 = 7, and 7 in binary is 0×2^4 + 0×2^3 + 1×2^2 + 1×2^1 + 1×2^0 (4 + 2 + 1).
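To see the pattern the question describes, it is enough to print a few values and their predecessors side by side (a quick Python sketch using the same example numbers):

# Subtracting 1 clears the right-most 1-bit and sets every zero below it.
for x in [28, 0b10101, 0b01000, 0b10000000]:
    print(f"{x:>3}: {x:08b} -> {x - 1:08b}")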
It does not matter what base you are in, the math does not change:
1000
- 0001
========
This is base 10, easier to see:
1 0 0 0
- 0 0 0 1
=============
We start in the ones column (base to the power 0). The top number is smaller than the bottom, so we have to borrow, but the next column does not have anything to lend, and so on, so we have to work left until we can borrow something. A borrowed 1 is worth one whole unit of the column it came from, which is base units of the column it lands in; so if you borrow from the hundreds column into the tens column, that is 10 tens.
So first borrow:
0 10 0 0
- 0 0 0 1
=============
Second borrow:
0 9 10 0
- 0 0 0 1
=============
Third borrow:
0 9 9 10
- 0 0 0 1
=============
And now we can work the ones column (base to the power zero):
0 9 9 10
- 0 0 0 1
=============
9
And in this case can easily finish it up:
0 9 9 10
- 0 0 0 1
=============
0 9 9 9
So base 5:
1 0 0 0
- 0 0 0 1
===================
0 5 0 0
- 0 0 0 1
===================
0 4 5 0
- 0 0 0 1
===================
0 4 4 5
- 0 0 0 1
===================
0 4 4 5
- 0 0 0 1
===================
0 4 4 4
And base 2 (here a '10' in a column is the borrowed value of two, written in binary):
1 0 0 0
- 0 0 0 1
==============
0 10 0 0
- 0 0 0 1
==============
0 1 10 0
- 0 0 0 1
==============
0 1 1 10
- 0 0 0 1
==============
0 1 1 10
- 0 0 0 1
==============
0 1 1 1
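The borrow chain works the same way in any base, which a small helper makes easy to see (a Python sketch; subtract_one is just an illustrative name, not anything from the answer):

def subtract_one(digits, base):
    """Subtract 1 from a non-zero number given as a list of digits, msb first."""
    digits = digits[:]              # work on a copy
    i = len(digits) - 1
    while digits[i] == 0:           # every trailing zero becomes base - 1 ...
        digits[i] = base - 1
        i -= 1
    digits[i] -= 1                  # ... and the first non-zero digit drops by one
    return digits

print(subtract_one([1, 0, 0, 0], 10))  # [0, 9, 9, 9]
print(subtract_one([1, 0, 0, 0], 5))   # [0, 4, 4, 4]
print(subtract_one([1, 0, 0, 0], 2))   # [0, 1, 1, 1]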
Two's complement comes into play when you actually implement this in logic. We know from elementary programming classes that "two's complement" means "invert and add one" to negate a number, and we know from grade-school math that x - y = x + (-y), so:
0
1000
- 0001
=======
This is the same as:
1 <--- add one
1000
+ 1110 <--- invert
=======
Finish:
10001
1000
+ 1110
=======
0111
So for subtraction you invert (ones' complement) the second operand and invert the carry in, and feed these to an adder. Some architectures invert the carry out and call it a borrow; some just leave it unmodified. When doing it this way, as we see above, the carry out is a 1 if there was NO borrow and a 0 if there was a borrow.
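Assuming a 4-bit adder like the worked example above, the "invert the operand, invert the carry in" trick can be sketched in Python (subtract_via_adder and MASK are just illustrative names):

MASK = 0b1111          # pretend we only have a 4-bit adder

def subtract_via_adder(x, y):
    # Ones' complement the second operand and feed a carry in of 1
    # (the inverted borrow in of 0) into an ordinary adder.
    total = x + (~y & MASK) + 1
    result = total & MASK
    carry_out = total >> 4          # 1 means there was NO borrow
    return result, carry_out

print(subtract_via_adder(0b1000, 0b0001))  # (7, 1): 0b0111 with carry out 1, no borrow
print(subtract_via_adder(0b0001, 0b0010))  # (15, 0): wrapped around, a borrow occurred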
I believed this was a base-2-only thing, due to there being only zeros and ones. How do you invert a base 10 number? 1000 - 1 = 1000 + 9998 + 1 (9998 is the nines' complement of 0001; drop the final carry), hmm, actually that works.
So base 10 100 - 1 = 99, base 9 100 - 1 = 88, base 8 (octal) 100 - 1 = 77, base 7 100 - 1 = 66 and so on.
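The same check in base 10 (take the nines' complement, add one, drop the carry) also fits in a few lines; this Python sketch assumes 4-digit "registers":

DIGITS = 4

def tens_complement_subtract(x, y):
    nines = (10**DIGITS - 1) - y         # e.g. 0001 -> 9998
    return (x + nines + 1) % 10**DIGITS  # add one and drop the carry digit

print(tens_complement_subtract(1000, 1))  # 999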

How to convert a 3 input AND gate into a NOR gate?

I know that I can say convert a 2-input AND gate into a NOR gate by simply inverting the two inputs because of DeMorgan's Theorem.
But how would you do the equivalent on a 3-input AND gate?
Say...
     ____
A___|    \
B___|     )___
C___|____/
I'm trying to understand this because my homework asks me to take a circuit and convert it using NOR synthesis to only use nor gates, and I know how to do it with 2 input gates, but the gate with 3 inputs is throwing me for a spin.
DeMorgan's theorem for 2-input AND would produce:
AB
(AB)''
(A' + B')'
So, yes, the inputs are inverted and fed into a NOR gate.
DeMorgan's theorem for 3-input AND would similarly produce:
ABC
(ABC)''
(A' + B' + C')'
Which is, again, inputs inverted and fed into a (3-input) NOR gate:
___
A--O\ \
B--O ) )O---
C--O/___ /
#SailorChibi has truth tables that show equivalence.
If I haven't made any mistakes it is pretty much the same: invert all 3 of the inputs and you get a NOR.
Tables:
AND with inverted inputs is exactly the same as ...
A' B' C'    out
1  1  1  =   1
1  1  0  =   0
1  0  1  =   0
1  0  0  =   0
0  1  1  =   0
0  1  0  =   0
0  0  1  =   0
0  0  0  =   0
... NOR with the original inputs:
A  B  C     out
0  0  0  =   1
0  0  1  =   0
0  1  0  =   0
0  1  1  =   0
1  0  0  =   0
1  0  1  =   0
1  1  0  =   0
1  1  1  =   0
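The same equivalence can also be brute-forced over all eight input combinations (a minimal Python sketch, separate from either answer):

from itertools import product

# A 3-input AND equals a 3-input NOR fed with inverted inputs: ABC = (A' + B' + C')'
for a, b, c in product([0, 1], repeat=3):
    and_gate = a & b & c
    nor_of_inverted = 1 - ((1 - a) | (1 - b) | (1 - c))
    assert and_gate == nor_of_inverted
print("ABC == NOR(A', B', C') for all 8 input combinations")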

Any way to prove Boolean algebra theorems 9 and 10?

I have a problem grasping the fact that A + AB = A and A(A + B) = A
Can anyone tell me how?
To understand this, look at the truth tables (1=true, 0=false):
"A" "B" "AB" "A+B" "A + AB" "A(A+B)"
0 0 0 0 0 0
0 1 0 1 0 0
1 0 0 1 1 1
1 1 1 1 1 1
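An algebraic route is also short: A + AB = A(1 + B) = A·1 = A, and A(A + B) = AA + AB = A + AB = A by the first identity. If you prefer an exhaustive check, a Python sketch does it in a few lines:

from itertools import product

# Absorption laws: A + AB = A and A(A + B) = A, checked over every input.
for a, b in product([0, 1], repeat=2):
    assert (a | (a & b)) == a
    assert (a & (a | b)) == a
print("Both absorption identities hold for all combinations of A and B")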

Boolean Algebra simplification

Is it possible to simplify this Boolean function?
(!A*!B*!C) + (!A*!B*C*!D) + (A*!B*!C*D) + (A*!B*C*!D) + (A*B*!C*!D)
I came up with this:
!B*((!A*(!C+!D)) + A*(C XOR D)) + (A*B*!C*!D)
Messy to look at, but there are fewer terms.
Look at the truth table:
A B C D X
0 0 0 0 1
0 0 0 1 1
0 0 1 0 1
0 0 1 1 0
0 1 0 0 0
0 1 0 1 0
0 1 1 0 0
0 1 1 1 0
1 0 0 0 0
1 0 0 1 1
1 0 1 0 1
1 0 1 1 0
1 1 0 0 1
1 1 0 1 0
1 1 1 0 0
1 1 1 1 0
It looks like you can take the three parts of the table where X = 1 and simplify this to the sum of three terms:
!A*!B*!(C*D) + A*!B*(C^D) + A*B*!C*!D
Note that I've used XOR (^) in the second term. If you can't use XOR then you'll need to expand the second term a little.
You can reduce this further by factoring out either !B or A from two of the terms, e.g.:
!B*(!A*!(C*D) + A*(C^D)) + A*B*!C*!D
or:
!A*!B*!(C*D) + A*(!B*(C^D) + B*!C*!D)
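To be confident the reduced forms really match the original sum of products, an exhaustive comparison over all 16 input combinations is cheap (a Python sketch; n() is just a helper standing in for NOT):

from itertools import product

def n(x):
    return 1 - x   # logical NOT for 0/1 values

for a, b, c, d in product([0, 1], repeat=4):
    original = (n(a)&n(b)&n(c)) | (n(a)&n(b)&c&n(d)) | (a&n(b)&n(c)&d) \
             | (a&n(b)&c&n(d)) | (a&b&n(c)&n(d))
    reduced  = (n(a)&n(b)&n(c&d)) | (a&n(b)&(c^d)) | (a&b&n(c)&n(d))
    factored = (n(b) & ((n(a)&n(c&d)) | (a&(c^d)))) | (a&b&n(c)&n(d))
    assert original == reduced == factored
print("All three forms agree on every input")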

octave matrix: replace 0's with 1's and replace 1's with 0's

I have a matrix of 0's and 1's, say:
0 1 0 0
0 0 1 0
1 0 0 0
I want to generate another matrix that replaces 0's with 1's and 1's with 0's:
1 0 1 1
1 1 0 1
0 1 1 1
Anyone know how to do this in Octave?
b = 1 - a;   % element-wise: every 0 becomes 1 and every 1 becomes 0