Octal full adder how-to - binary

I have this project listed below and I'm not sure where to start. Maybe someone can give me a few pointers or point me in the right direction for getting started?
Thanks!!
Input: A, B = octal digits (see representation below); Cin = binary digit
Output: S = octal digit (see representation below); Cout = binary digit
Task: Using binary FAs, design a circuit that acts as an octal FA. More specifically,
this circuit would input the two octal digits A, B, convert them into binary numbers, add
them using only binary FAs, convert the binary result back to octal, and output the sum as
an octal digit, and the binary carry out.
Input/Output binary representation of octal digits
Every octal digit will be represented using the following 8-bit binary representation:
Octal   8-bit input lines
digit   0  1  2  3  4  5  6  7
  0     1  0  0  0  0  0  0  0
  1     0  1  0  0  0  0  0  0
  2     0  0  1  0  0  0  0  0
  3     0  0  0  1  0  0  0  0
  4     0  0  0  0  1  0  0  0
  5     0  0  0  0  0  1  0  0
  6     0  0  0  0  0  0  1  0
  7     0  0  0  0  0  0  0  1
You are required to design the circuit in a structured way.

OK, so essentially you're being asked to design an 8-to-3 encoder and a 3-to-8 decoder. Since you're given FAs to work with, the adding itself isn't really the point of the assignment.
First we need to define how an encoder and decoder function. So we construct a truth table:
Encoder:
Input | Output
01234567 | 421
-----------------
10000000 | 000
01000000 | 001
00100000 | 010
00010000 | 011
00001000 | 100
00000100 | 101
00000010 | 110
00000001 | 111
and the decoder is just the reverse of that.
Next, how do we construct our encoder? Well, we can simply attack it one bit at a time.
So for the 1s digit: if input line 1, 3, 5 or 7 is set, then it's 1, otherwise it's 0. So we just need one big OR gate with 4 inputs connected to lines 1, 3, 5 and 7.
For the 2s digit we need an OR gate connected to lines 2, 3, 6 and 7. Finally, for the 4s digit, connect one to lines 4, 5, 6 and 7. This doesn't do any error checking to make sure extra bits aren't set; the behavior in that case seems to be undefined by the spec, though, so it's probably OK.
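If you want to sanity-check those three OR equations in software, here's a minimal Python sketch (the function name and the input-list layout are just for illustration):

def encode_octal(lines):
    """One-hot (8 input lines) -> 3-bit binary (b4, b2, b1), modelled as three OR gates."""
    b1 = lines[1] | lines[3] | lines[5] | lines[7]   # 1s bit: OR of lines 1, 3, 5, 7
    b2 = lines[2] | lines[3] | lines[6] | lines[7]   # 2s bit: OR of lines 2, 3, 6, 7
    b4 = lines[4] | lines[5] | lines[6] | lines[7]   # 4s bit: OR of lines 4, 5, 6, 7
    return b4, b2, b1

# Octal digit 5 is line 5 high, everything else low -> (1, 0, 1), i.e. binary 101
print(encode_octal([0, 0, 0, 0, 0, 1, 0, 0]))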
Then you take your three lines and feed them to your adders. This is easy so I won't get into it.
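Even so, here's roughly what that chain looks like in the same sketch style: each binary FA takes two input bits plus a carry-in and produces a sum bit and a carry-out, and the three FAs are chained so each carry-out feeds the next carry-in (a ripple-carry adder).

def full_adder(a, b, cin):
    """One binary full adder: sum = a XOR b XOR cin, carry = majority(a, b, cin)."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def add3(a, b, cin):
    """Chain three full adders to add two 3-bit values given as (b4, b2, b1) tuples."""
    a4, a2, a1 = a
    b4, b2, b1 = b
    s1, c1 = full_adder(a1, b1, cin)    # least significant bit first
    s2, c2 = full_adder(a2, b2, c1)
    s4, cout = full_adder(a4, b4, c2)
    return (s4, s2, s1), cout

# 5 + 3 with carry-in 0 is 8 -> sum 000 with carry-out 1 (octal digit 0, Cout = 1)
print(add3((1, 0, 1), (0, 1, 1), 0))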
Finally you need a decoder; this is a bit trickier than the encoder.
Let's look at the decoder truth table:
Input | Output
421 | 01234567
----------------
000 | 10000000
001 | 01000000
010 | 00100000
011 | 00010000
100 | 00001000
101 | 00000100
110 | 00000010
111 | 00000001
This time we can't just use 3 OR gates and call it a day.
Let's write this down in C-like code:
if (!input[0] && !input[1] && !input[2])
    output[0] = 1;
if (input[0] && !input[1] && !input[2])
    output[1] = 1;
if (!input[0] && input[1] && !input[2])
    output[2] = 1;
if (input[0] && input[1] && !input[2])
    output[3] = 1;
if (!input[0] && !input[1] && input[2])
    output[4] = 1;
if (input[0] && !input[1] && input[2])
    output[5] = 1;
if (!input[0] && input[1] && input[2])
    output[6] = 1;
if (input[0] && input[1] && input[2])
    output[7] = 1;
So, it looks like we're going to be using eight 3-input AND gates and three NOT gates!
This one is a bit more complicated, so I made an example implementation:
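In the same Python style as above, the decoder behaviour would look roughly like this (again just a behavioural sketch of the logic, not a gate-level drawing):

def decode_octal(b4, b2, b1):
    """3-bit binary -> one-hot output: eight 3-input ANDs fed by the bits and their inverses."""
    n1, n2, n4 = 1 - b1, 1 - b2, 1 - b4      # the three NOT gates
    return [
        n4 & n2 & n1,   # output line 0
        n4 & n2 & b1,   # output line 1
        n4 & b2 & n1,   # output line 2
        n4 & b2 & b1,   # output line 3
        b4 & n2 & n1,   # output line 4
        b4 & n2 & b1,   # output line 5
        b4 & b2 & n1,   # output line 6
        b4 & b2 & b1,   # output line 7
    ]

# Binary 101 -> only output line 5 goes high
print(decode_octal(1, 0, 1))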

If the conversion is to be done by hand in class, you can try the following way.
Conversion of Octal to Binary:
To convert octal to binary, replace each octal digit by its binary representation.
Example: Convert 51₈ to binary:
5₈ = 101₂
1₈ = 001₂
Therefore, 51₈ = 101 001₂.
Conversion of Binary to Octal:
The process is the reverse of the previous algorithm. The binary digits are grouped in threes, starting from the binary point (if present) or from the last digit, and proceeding to the left and to the right. Add leading 0s (or trailing 0s to the right of the binary point) to fill out the last group of three if necessary. Then replace each group of three with the equivalent octal digit.
Example, convert binary 1010111100 to octal:
(Adding two leading zeros, the number is 001 010 111 100.)
001 = 1, 010 = 2, 111 = 7, 100 = 4
Therefore, 1010111100₂ = 1274₈.
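If you want to check these hand conversions, Python can do both directions directly (parse with an explicit base, then format with the bin/oct built-ins):

# Octal -> binary: parse the string as base 8, then format the value in binary
print(bin(int("51", 8)))            # prints 0b101001

# Binary -> octal: parse the string as base 2, then format the value in octal
print(oct(int("1010111100", 2)))    # prints 0o1274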

To convert to and from octal you can use an encoder & decoder pair (http://www.asic-world.com/digital/combo3.html). The 3-bit adder can be made by chaining the 3 FAs.

Related

Special cases of modulo operator

I included the expression (a%b + b) % b in some old code and remember concluding that this was due to some special cases of a%b that I needed to be careful about. a and b are C ints and % is the C modulo operator. Now I am having trouble seeing where these two expressions ever differ. Are they completely equivalent?
Mathematical long division demands that the remainder is zero or positive: a = q*b + r with 0 <= r < b.
In computer implementations of this operation it is possible for a%b to be negative. Adding b then gives the non-negative remainder. To be universally useful you need either an if-branch or another remainder operation for the case where a%b was already non-negative.
The % operator does not implement a true modulo. In fact (for b > 0),
a ≥ 0 -> a % b = a mod b
a < 0 -> a % b = -((-a) mod b)
Now,
a                -4  -3  -2  -1   0   1   2   3   4
a mod 4           0   1   2   3   0   1   2   3   0
a % 4             0  -3  -2  -1   0   1   2   3   0
(a % 4 + 4) % 4   0   1   2   3   0   1   2   3   0
Unfortunately, this doubles the cost of the modulo, which is significant.
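To see that table come out of actual code: Python's own % already behaves like the mathematical mod for positive b, while math.fmod truncates toward zero the way C's % does, so it can stand in for the C behaviour in a quick check:

import math

b = 4
for a in range(-4, 5):
    c_rem = int(math.fmod(a, b))   # truncating remainder, like C's %
    true_mod = a % b               # Python's %, the mathematical mod for b > 0
    fixed = (c_rem + b) % b        # the (a%b + b) % b workaround from the question
    print(a, c_rem, true_mod, fixed)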

Good explanation on why x-1 "looks" the way it does in binary

Let's take the number 28 in binary:
0b11100 # 28
If we subtract 1 from the number it looks like this:
0b11011 # 27
The way I would explain how it 'looks' is: when subtracting 1 from a number, the right-most 1 bit is set to zero and all the zeros to its right are set to one. For example:
0b10101 - 1
= 0b10100
0b01000 - 1
= 0b00111
0b10000000 - 1
= 0b01111111
What would be the best explanation as to why this occurs though? I'm sure it's a property of binary twos complement, but I'm trying to figure out the best way to explain this to myself so that I can gain a deeper understanding of it.
Binary numbers have the general form N = d_n x b^n + d_(n-1) x b^(n-1) + … + d_1 x b^1 + d_0 x b^0, where b is the base (2), d is a digit < base (0 or 1) and n is the position.
We write binary numbers down without b (because we know it's always 2) and also without the n exponent, which goes implicitly from 0 for the least significant (rightmost) digit, 1 for the next one to the left, etc.
For example your number 28 is 1 x 2^4 + 1 x 2^3 + 1 x 2^2 + 0 x 2^1 + 0 x 2^0 = 1 x 16 + 1 x 8 + 1 x 4 + 0 x 2 + 0 x 1.
In binary:
1 - 1 = 0
0 - 1 = 1 and you borrow 1 from the next position to the left (same as when you do 10 - 1 in decimal: 0 - 1 gives 9 and you borrow 1 from the tens place)
When subtracting 1 you start from the rightmost position; if there's a 0 you turn it into 1 and carry the subtraction up to the next (left) position, and that chains all the way left until you find a position where you can subtract without affecting a higher position.
0b01000 - 1 can be written as 0 x 2^4 + 1 x 2^3 + 0 x 2^2 + 0 x 2^1 + 0 x 2^0 - 1 x 2^0. In plain decimal that is 8 - 1 = 7, and 7 in binary is 0 x 2^4 + 0 x 2^3 + 1 x 2^2 + 1 x 2^1 + 1 x 2^0 (4 + 2 + 1).
It does not matter what base you are in, the math does not change:
1000
- 0001
========
This is base 10, easier to see:
1 0 0 0
- 0 0 0 1
=============
We start in the ones column (base to the power 0). The top number is smaller than the bottom, so we have to borrow, but the next column has nothing to lend either, and so on, so we have to keep working left until we can borrow something. A borrowed value is worth the base times the column it lands in, so if you borrow from the hundreds column into the tens column, that is 10 tens:
So first borrow:
0 10 0 0
- 0 0 0 1
=============
Second borrow:
0 9 10 0
- 0 0 0 1
=============
Third borrow:
0 9 9 10
- 0 0 0 1
=============
And now we can work the base to the power zero column (the ones column):
0 9 9 10
- 0 0 0 1
=============
9
And in this case we can easily finish it up:
0 9 9 10
- 0 0 0 1
=============
0 9 9 9
So base 5:
1 0 0 0
- 0 0 0 1
===================
0 5 0 0
- 0 0 0 1
===================
0 4 5 0
- 0 0 0 1
===================
0 4 4 5
- 0 0 0 1
===================
0 4 4 5
- 0 0 0 1
===================
0 4 4 4
And base 2:
1 0 0 0
- 0 0 0 1
==============
0 10 0 0
- 0 0 0 1
==============
0 1 10 0
- 0 0 0 1
==============
0 1 1 10
- 0 0 0 1
==============
0 1 1 10
- 0 0 0 1
==============
0 1 1 1
Twos complement comes into play when you actually implement this in logic. We know from elementary programming classes that when we talk about "twos complement" we learn to "invert and add one" to negate a number. And we know from grade school math that x - y = x + (-y), so:
0
1000
- 0001
=======
This is the same as:
1 <--- add one
1000
+ 1110 <--- invert
=======
Finish:
10001
1000
+ 1110
=======
0111
So for subtraction you invert/ones complement the second operand and the carry in and feed these to an adder. Some architectures invert the carry out and call it a borrow, some just leave it unmodified. When doing it this way as we see above the carry out is a 1 if there was NO borrow. It is a zero if there was a borrow.
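A few lines of Python make the same point, using a hypothetical fixed-width helper (the masking just emulates a 4-bit adder so the carry out is visible):

def sub_via_adder(x, y, bits=4):
    """Compute x - y as x + (~y) + 1 on a fixed-width adder; return (result, carry_out)."""
    mask = (1 << bits) - 1
    total = x + ((~y) & mask) + 1        # invert the subtrahend and add one (the carry in)
    return total & mask, (total >> bits) & 1

print(sub_via_adder(0b1000, 0b0001))     # prints (7, 1): 0b0111 with carry out 1, i.e. no borrow
print(sub_via_adder(0b0001, 0b0010))     # prints (15, 0): 0b1111 with carry out 0, i.e. a borrow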
I believe this is a base 2 thing only due to having only zero or one. How do you invert a base 10 number? 1000 - 1 = 1000 + 9998 + 1, hmm actually that works.
So base 10 100 - 1 = 99, base 9 100 - 1 = 88, base 8 (octal) 100 - 1 = 77, base 7 100 - 1 = 66 and so on.

Decimal to half-precision floating point

I'm currently trying to convert 44/7 to half-precision floating point format.
I'm not sure if I've done it correctly so far, so I'd really appreciate it if someone could have a look at it.
44/7 = 6.285714285714...
6 in binary -> 110;
0.285714... * 2 = 0.571428... -> 0
0.571428... * 2 = 1.142857... -> 1
0.142857... * 2 = 0.285714... -> 0
... -> 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 0 1...
-> 110.01001001001001
-> 1.1001001001001001 -> exponent: 2;
Bias + Exponent : 2+15 = 17 => 1 0 0 0 1
All stitched together: 0 1 0 0 0 1 1 0 0 1 0 0 1 0 0 1
I've never converted decimal to 16bit IEEE754, is this the correct way of converting it?
Thanks a lot!
Correct. As you might expect, it is quantized to 6.28515625.
0100011001001001 (base 2) = 4649 (base 16)
6.2857142857139996
= H(4649)
= F(40C92492)
= D(40192492 49249107)
= A(0X1.92492492491070P+2)
6.28515625
= H(4649)
= F(40C92000)
= D(40192400 00000000)
= A(0X1.92400000000000P+2)
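If you want to double-check a conversion like this programmatically, Python's struct module supports the IEEE 754 binary16 format directly (format character 'e', available since Python 3.6):

import struct

x = 44 / 7
raw = struct.pack('>e', x)              # round to half precision, big-endian byte order
print(raw.hex())                        # prints 4649
print(struct.unpack('>e', raw)[0])      # prints 6.28515625, the quantized value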
Other data points:
+0. 0000
-0. 8000
-1. BC00
+1. 3C00
+2. 4000
+4. 4400
+8. 4800
+16. 4C00
+32768. 7800
+Max 7BFF 65504
+.5f 3800
+.25f 3400
+.125f 3000
+.0625f 2C00
+MinNorm 0400 +6.103515625e-05
-MinNorm 8400 -6.103515625e-05
+MinDenorm 0001 +5.9604644775390625e-08
-MinDenorm 8001 -5.9604644775390625e-08
+Infinity 7C00
-Infinity FC00
+NaN(0) 7E00
-NaN(0) FE00

Is it possible to use logarithms to convert numbers to binary?

I'm a CS freshman and I find the division way of finding a binary number to be a pain. Is it possible to use log to quickly find 24, for instance, in binary?
If you want to use logarithms, you can.
Define log2(b) as log(b) / log(2) or ln(b) / ln(2) (they are the same).
Repeat the following:
Define n as the integer part of log2(b). There is a 1 in the nth position in the binary representation of b.
Set b = b - 2^n
Repeat first step until b = 0.
Worked example: Converting 2835 to binary
log2(2835) = 11.47... => n = 11
The binary representation has a 1 in the 2^11 position.
2835 - (2^11 = 2048) = 787
log2(787) = 9.62... => n = 9
The binary representation has a 1 in the 2^9 position.
787 - (2^9 = 512) = 275
log2(275) = 8.10... => n = 8
The binary representation has a 1 in the 2^8 position.
275 - (2^8 = 256) = 19
log2(19) = 4.25... => n = 4
The binary representation has a 1 in the 2^4 position.
19 - (2^4 = 16) = 3
log2(3) = 1.58... => n = 1
The binary representation has a 1 in the 2^1 position.
3 - (2^1 = 2) = 1
log2(1) = 0 => n = 0
The binary representation has a 1 in the 2^0 position.
We know the binary representation has 1s in the 2^11, 2^9, 2^8, 2^4, 2^1, and 2^0 positions:
2^      11 10  9  8  7  6  5  4  3  2  1  0
binary   1  0  1  1  0  0  0  1  0  0  1  1
so the binary representation of 2835 is 101100010011.
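The whole log-based procedure is only a few lines of code. Here's a small Python sketch of it (the helper name is made up, and note that plain floating-point log2 can misbehave for very large inputs):

import math

def to_binary_via_log(b):
    """Build the binary string of a positive integer by repeatedly peeling off 2**floor(log2(b))."""
    positions = []
    while b > 0:
        n = int(math.log2(b))       # position of the highest set bit
        positions.append(n)
        b -= 2 ** n
    width = positions[0] + 1
    return ''.join('1' if i in positions else '0' for i in range(width - 1, -1, -1))

print(to_binary_via_log(2835))      # prints 101100010011
print(bin(2835))                    # prints 0b101100010011, same thing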
From a CS perspective, binary is quite easy because you usually only need to go up to 255. Or 15 if using HEX notation. The more you use it, the easier it gets.
How I do it on the fly is by remembering all the powers of 2 up to 128, including 1. (The presence of the 1 instead of 1.4xxx possibly means that you can't use logs.)
128,64,32,16,8,4,2,1
Then I use the rule that if the number is bigger than each power in descending order, that is a '1' and subtract it, else it's a '0'.
So 163
163 >= 128 = '1' R 35
35 !>= 64 = '0'
35 >= 32 = '1' R 3
3 !>= 16 = '0'
3 !>= 8 = '0'
3 !>= 4 = '0'
3 >= 2 = '1' R 1
1 >= 1 = '1' R 0
163 = 10100011.
It may not be the most elegant method, but when you just need to convert something ad hoc, thinking of it as comparison and subtraction may be easier than division.
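As a quick illustration, that comparison-and-subtraction loop translates almost word for word into Python (the helper name is just for illustration):

def to_binary_by_compare(n, powers=(128, 64, 32, 16, 8, 4, 2, 1)):
    """Walk the powers of two in descending order; emit '1' and subtract when n is big enough."""
    out = ''
    for p in powers:
        if n >= p:
            out += '1'
            n -= p
        else:
            out += '0'
    return out

print(to_binary_by_compare(163))    # prints 10100011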
Yes, you have to loop through 0 -> power which is bigger than you need and then take the remainder and do the same, which is a pain too.
I would suggest trying the recursive approach to division called 'divide and conquer':
http://web.stanford.edu/class/archive/cs/cs161/cs161.1138/lectures/05/Small05.pdf
But again, since you need a binary representation, I guess that unless you use ready-made utilities, the division approach is the simplest one IMHO.

What are w-bit words?

What are w-bit words in computer architecture?
For two 7-bit words
1011001 = A
1101011 = B, how does multiplication return
10010100110011?
Isn't this just simple binary multiplication?
Please provide an example.
w-bit is just the typical nomenclature for n-bit, because w is usually short for word size.
Both adding and multiplying are done just the same as in decimal (base 10). You just need to remember this truth table:
Multiplying
-----------
0 x 0 = 0
0 x 1 = 0
1 x 0 = 0
1 x 1 = 1
Adding
-----------
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 0 (w/ carry)
First adding. To add, you add just like you would in normal arithmetic, except follow the truth table above:
00000101 = 5
+ 00000011 = 3
--------------
00001000 = 8
How this works is that you start from the right and work left. 1 + 1 = 0, but you carry a 1 over to the next column. So the next column is 0 + 1, which would be 1, but since you carried a 1 from the previous column, it's really 1 + 1, which is 0. You carry a 1 over to the next column, which is 1 + 0, but really 1 + 1 because of the carry. So 0 again, and finally you move the 1 to the next column, which is 0 + 0, but because of our carry becomes 1 + 0, which is 1. So our answer is 1000, which is 8 in decimal. 5 + 3 = 8, so we know we are right.
Next, multiplying:
00000101 = 5
x 00000011 = 3
----------
101 = 5
+ 1010 = 10
----------
1111 = 15
How this works is you multiply the top number 00000101 by the rightmost digit in the second row. So 00000011 is our second row and 1 is the rightmost digit, so 00000101 times 1 = 101. Next you put a 0 placeholder in the rightmost column below it, just like in normal multiplication. Then you multiply our original top number 00000101 by the next digit going left in our original problem 00000011. Again it produces 101. Next you simply add 101 + 1010 = 1111. That is the answer.
Yes, it's simple binary multiplication:
>>> 0b1011001
89
>>> chr(_)
'Y'
>>> 0b1101011
107
>>> chr(_)
'k'
>>> ord('Y') * ord('k')
9523
>>> bin(_)
'0b10010100110011'
If you want to multiply, you simply do the multiplication the same as with decimal numbers, except that you have to add the carries in binary:
1011001
x1101011
-------
1011001
1011001.
0000000..
1011001...
0000000....
1011001.....
1011001......
--------------
10010100110011
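That long-hand layout (shift the multiplicand one place left for each multiplier bit, keep the rows whose bit is 1, add them all up) is just the shift-and-add algorithm. A short Python sketch, assuming plain non-negative integers:

def shift_and_add(a, b):
    """Multiply by summing a shifted copy of a for every set bit of b."""
    result = 0
    shift = 0
    while b:
        if b & 1:                    # this multiplier bit is 1: add the shifted multiplicand
            result += a << shift
        b >>= 1
        shift += 1
    return result

print(bin(shift_and_add(0b1011001, 0b1101011)))   # prints 0b10010100110011 (89 * 107 = 9523)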
w-bit words aren't anything by themselves. Assuming that the value of w has been previously defined in the context in which "w-bit word" is used, then it simply means a word that is composed of w bits. For instance:
A version of RC6 is more accurately specified as RC6-w/r/b where the word size
is "w" bits, encryption consists of a nonnegative number of rounds "r," and
"b" denotes the length of the encryption key in bytes. Since the AES
submission is targetted at w=32, and r=20, we shall use RC6 as shorthand to
refers to such versions.
So in the context of that document, a "w-bit word" is just a 32-bit value.
As for your multiplication, I'm not sure what you are asking. Google confirms the result as correct:
1011001 * 1101011 = 10010100110011