Hardware design of an adder for 3 binary numbers

I want to design a binary full adder that adds 3 binary numbers; a typical cell of this adder would look like this.
Can someone explain why we have 2 carries to the next bit?
Regards

Let's look at a particular formula: 0b11 + 0b11 + 0b11 == 0b1001.
The schematic would have one adder cell per bit position:
Adder 0 has the following properties:
Normal inputs can total at most 0b11.
Carried inputs are always 0b00.
Maximum output is 0b11 (one carry bit, one sum bit).
Adder 1 has the following properties:
Normal inputs can total at most 0b11.
Carried inputs can total at most 0b01 (the single carry bit from adder 0).
Maximum output is 0b100 (two carry bits, one sum bit).
From adder 2 onward the pattern is stable: a cell's carried inputs can total at most 0b10 (the previous cell's two carry bits), so its total can reach 0b11 + 0b10 = 0b101, which still fits in two carry bits and one sum bit. That carry value of 0, 1, or 2 needs two bits to represent, and that is why each cell passes 2 carries to the next bit.
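To see the whole chain in action, here is a minimal Python sketch (my own illustration, not hardware) that simulates one cell per bit position. Each cell adds the three operand bits plus the two-bit carry from the previous cell; the assert shows the carry value never exceeds 0b10, which is exactly why two carry wires per stage are enough.

def add3(a, b, c, width=8):
    # Add three unsigned numbers one bit position at a time,
    # mimicking a chain of cells that each pass a 2-bit carry.
    result = 0
    carry = 0                       # 2-bit carry between cells, value 0..2
    for i in range(width + 2):      # a couple of extra positions for the final carries
        s = ((a >> i) & 1) + ((b >> i) & 1) + ((c >> i) & 1) + carry
        result |= (s & 1) << i      # one sum bit stays at this position
        carry = s >> 1              # at most 0b10, hence two carry wires
        assert carry <= 0b10
    return result

print(bin(add3(0b11, 0b11, 0b11)))  # 0b1001, matching the example above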

Related

LC-3 algorithm for converting ASCII strings to Binary Values

Figure 10.4 provides an algorithm for converting ASCII strings to binary values. Suppose the decimal number is arbitrarily long. Rather than store a table of 10 values for the thousands-place digit, another table of 10 values for the ten-thousands-place digit, and so on, design an algorithm to do the conversion without resorting to any tables whatsoever.
I have attached pictures of figure 10.4. I am not looking for an answer to the problem, but rather can someone please explain this problem and perhaps give some direction on how to go about creating the algorithm?
I am unsure as to what it means by tables and do not know where to start really.
The tables are those global, initialized arrays: one called Lookup10 holding 10, 20, 30, 40, ..., and another called Lookup100 holding 100, 200, 300, 400...
You can ignore the tables: as per the assignment instructions, you're supposed to find a different way to accomplish this anyway. Or, you can run that code in a simulator, or trace it mentally, to understand how it works.
The bottom line is that LC-3, while it can do anything (it is Turing complete), can't do much in any one instruction. For arithmetic & logic, it can do add, not, and. That's pretty much it! But that's enough; note that modern hardware does everything with only one logic gate, namely NAND, which is a binary operator (so NAND is directly available; NOT by giving NAND the same operand for both inputs; AND by doing NOT after NAND; OR by applying NOT to both inputs first and then NAND; etc.).
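As a quick illustration (plain Python, with gate names of my own choosing), here is that NAND construction; every derived gate is built only from calls to nand:

def nand(a, b):              # the one primitive gate
    return 0 if (a and b) else 1

def not_(a):                 # NOT: NAND with the same operand on both inputs
    return nand(a, a)

def and_(a, b):              # AND: NOT applied after NAND
    return not_(nand(a, b))

def or_(a, b):               # OR: NOT both inputs first, then NAND
    return nand(not_(a), not_(b))

for a in (0, 1):             # verify all four gates against their truth tables
    for b in (0, 1):
        print(a, b, not_(a), and_(a, b), or_(a, b))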
For example, LC-3 cannot multiply, divide, modulus, or right shift directly. Each of those operations takes many instructions and, in the general case, some looping construct. Multiplication can be done by repetitive addition, and division/modulus by repetitive subtraction. These are super inefficient for larger operands; there are much more efficient algorithms, but they are also substantially more complex, so they increase program complexity well beyond the repetitive-operation approach.
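A sketch of those repetitive approaches in Python (LC-3 would express the same loops in assembly; the function names are mine):

def multiply(a, b):          # multiplication by repeated addition (assumes b >= 0)
    product = 0
    for _ in range(b):
        product += a
    return product

def divmod_slow(a, b):       # division/modulus by repeated subtraction (a >= 0, b > 0)
    quotient = 0
    while a >= b:
        a -= b
        quotient += 1
    return quotient, a       # a is now the remainder

print(multiply(6, 7))        # 42
print(divmod_slow(17, 5))    # (3, 2)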
That subroutine goes backwards through the user's input string. It takes a string length count in R1 as a parameter supplied by the caller (not shown). It looks at the last character in the input and converts it from an ASCII character to a binary number.
(We would commonly do that conversion from ASCII character to numeric value using subtraction: moving the character values from the ASCII digit range 0x30..0x39 to numeric values in the range 0..9. Here they do it with masking, which also works. The subtraction approach integrates better with error detection (checking for characters that are not valid digits, which is not done here), whereas the masking approach is simpler for LC-3.)
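In Python terms, the two conversions look like this (my own helper names; the masking variant is what the figure's code does):

def digit_by_subtraction(ch):
    value = ord(ch) - 0x30           # move '0'..'9' (0x30..0x39) down to 0..9
    if not 0 <= value <= 9:          # error detection comes almost for free
        raise ValueError("not a digit: " + ch)
    return value

def digit_by_masking(ch):
    return ord(ch) & 0x0F            # '7' is 0x37, so the low nibble is the value

print(digit_by_subtraction('7'), digit_by_masking('7'))   # 7 7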
The subroutine then obtains the 2nd-to-last digit (moving backwards through the user's input string), converting that to binary using the mask approach. That yields a number between 0 and 9, which is used as an index into the first table, Lookup10. The value obtained from the table at that index position is the index × 10, so this table is a ×10 table. The same approach is used for the third digit (the first going forwards, the last going backwards), except it uses the 2nd table, which is a ×100 table.
The standard approach for string-to-binary conversion is called atoi (search for it), standing for ASCII to integer. It moves forwards through the string, and for every new digit it multiplies the value computed so far by 10 before adding in the next digit's numeric value.
So, if the string is 456, it first obtains 4; then, because there is another digit, 4 × 10 = 40, then + 5 for 45, then × 10 for 450, then + 6 for 456, and so on.
The advantage of this approach is that it can handle any number of digits (up to overflow).  The disadvantage, of course, is that it requires multiplication, which is a complication for LC-3.
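A minimal Python version of that forward-scanning atoi idea (digits only, no sign handling, my own naming):

def atoi(s):
    value = 0
    for ch in s:                          # move forwards through the string
        value = value * 10                # shift the value so far one decimal place left
        value = value + (ord(ch) - 0x30)  # add in the new digit's numeric value
    return value

print(atoi("456"))   # 456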
Multiplication where one operand is the constant 10 is fairly easy even with LC-3's limited capabilities, and can be done with simple addition, without looping. Basically:
n × 10 = n + n + n + n + n + n + n + n + n + n
and LC-3 can do those 9 additions in just 9 instructions.  Still, we can also observe that:
n × 10 = n × 8 + n × 2
and also that:
n × 10 = (n × 4 + n) × 2     (which is n × 5 × 2)
which can be done in just 4 instructions on LC-3 (and none of these needs looping)!
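Expressed with shifts in Python (each left shift by one corresponds to one LC-3 ADD of a register to itself):

def times10_a(n):
    return (n << 3) + (n << 1)     # n*8 + n*2

def times10_b(n):
    return ((n << 2) + n) << 1     # (n*4 + n) * 2, i.e. n*5*2
                                   # on LC-3: double, double, add n, double = 4 ADDs

print(times10_a(7), times10_b(7))  # 70 70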
So, if you want to use this approach, you'll have to figure out how to go forwards through the string instead of backwards as the given table version does, and how to multiply by 10 (use any one of the above suggestions).
There are other approaches as well if you study atoi. You could keep the backwards approach, but you would then have to multiply by 10, by 100, by 1000: a different factor for each successive digit. That might be done by repetitive addition, or by keeping a count of how many times to multiply by 10, e.g. n × 1000 = n × 10 × 10 × 10.
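For example, here is a sketch of that backwards variant, where each successive digit is scaled by a growing power of ten (built out of the shift-and-add ×10 from above):

def atoi_backwards(s):
    value = 0
    scale = 1                                # 1, 10, 100, ... one factor per digit
    for ch in reversed(s):                   # walk the string last digit first
        value += (ord(ch) - 0x30) * scale
        scale = ((scale << 2) + scale) << 1  # scale * 10 via shift-and-add
    return value

print(atoi_backwards("456"))   # 456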

Why does Xilinx's Multiplier IP product bitwidth have an extra bit?

Xilinx's complex multiplier IP documentation (PG104) has this to say about input and output bit-width setting:
Output Width: Selects the width of the output product real and imaginary components. The values are automatically initialized to provide the full-precision product when the A and B operand widths are set. The natural width of a complex multiplication is the sum of the input widths plus one. If Output Width is set to be less than this natural width, the least significant bits are truncated or rounded, as selected by the next GUI field.
(Italics are mine.) So if I multiply an 8-bit number by another 8-bit number, it wants to have the full precision output be a 17-bit number. The inputs and outputs are assumed to be signed integers.
The largest magnitude signed number that 8 bits can represent is -128 (0x80). 128*128=16384 or 0x4000 which is 15 bits. Add a sign bit and we're safe with a 16-bit output.
The largest positive is 127 (0x7F). 127*127=16129 or 0x3F01. Again, safe with 16 bits.
What am I missing? Why do they insist on the extra bit?
It is a complex multiplication. Each output component is the sum (or difference) of two multiplications:
pr = ar x br - ai x bi
pi = ar x bi + ai x br
Each individual product fits in 16 bits, but the sum of two such products can overflow them; that final addition is what adds the extra bit. The worst case is pi with all four components at -128: 16384 + 16384 = 32768, which exceeds the 16-bit signed maximum of 32767.
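A quick check of that worst case in Python, assuming 8-bit signed inputs:

# Take a = b = -128 - 128j: every component sits at the 8-bit signed minimum,
# so pi = ar*bi + ai*br doubles the largest possible product.
worst = (-128 * -128) + (-128 * -128)
print(worst)                  # 32768
print(worst > 2**15 - 1)      # True: it exceeds the 16-bit signed maximum of 32767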

Theory behind multiplying two numbers without operands

I have been reading Elements of Programming Interviews and am struggling to understand the passage below:
"The algorithm taught in grade-school for decimal multiplication does
not use repeated addition- it uses shift and add to achieve a much
better time complexity. We can do the same with binary numbers- to
multiply x and y we initialize the result to 0 and iterate through the
bits of x, adding (2^k)y to the result if the kth bit of x is 1.
The value (2^k)y can be computed by left-shifting y by k. Since we
cannot use add directly, we must implement it. We can apply the
grade-school algorithm for addition to the binary case, i.e, compute
the sum bit-by-bit and "rippling" the carry along.
As an example, we show how to multiply 13 = (1101) and 9 = (1001) using the algorithm described above. In the first iteration, since the LSB of 13 is 1, we set the result to (1001). The second bit of (1101) is 0, so we move on to the third bit. This bit is 1, so we shift (1001) to the left by 2 to obtain (100100), which we add to (1001) to get (101101). The fourth and final bit of (1101) is 1, so we shift (1001) to the left by 3 to obtain (1001000), which we add to (101101) to get (1110101) = 117."
My questions are:
What is the overall idea behind this, and how is it a "bit-by-bit" addition?
Where does (2^k)y come from?
What does it mean by "left-shifting y by k"?
In the example, why do we set the result to (1001) just because the LSB of 13 is 1?
The algorithm relies on the way numbers are coded in binary.
Let A be an unsigned number. A is coded by a set of bits a_{n-1} a_{n-2} ... a_0 in such a way that A = Σ_{i=0}^{n-1} a_i × 2^i.
Now, assume you have two numbers A and B coded in binary and you want to compute A×B:
B×A = B × Σ_{i=0}^{n-1} a_i × 2^i = Σ_{i=0}^{n-1} B × a_i × 2^i
a_i is equal to 0 or 1. If a_i = 0, the sum is not modified. If a_i = 1, we need to add B × 2^i.
So, we can simply deduce the multiplication algorithm:
# a[i] is the ith bit of A, n is the number of bits, B is the multiplicand
result = 0
for i in range(n):
    if a[i] == 1:
        result = result + B * 2**i
What is the overall idea behind this, how is it a "bit-by-bit" addition
It is just an application of the previous method, where you process each bit of the multiplier in succession.
where does (2^k)y come from
As mentioned above, it comes from the way binary numbers are coded. If the ith bit is set, then there is a 2^i term in the decomposition of the number.
what does it mean by "left-shifting y by k"
Left shift means "pushing" the bits leftwards and filling the "holes" with zeroes. Hence, if the number is 1101 and it is left-shifted by three, it becomes 1101000.
This is the way to multiply the number by 2^i (just as "left shifting" a decimal number by 2, putting zeroes in the freed places, is the way to multiply by 100 = 10^2).
In the example, why do we set result to (1001) just because the LSB of 13 is 1?
Because there is a 1 at the rightmost position, which corresponds to 2^0. So we left shift by 0 and add it to the result, which was initialized to 0.
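Putting the whole passage together in Python, with add itself built only from bitwise operations (the "rippling" carry) and multiply built from shifts and that add:

def add(a, b):
    # Grade-school binary addition: XOR gives the carry-less sum,
    # AND then shift-left gives the carries, which ripple until none remain.
    while b:
        carry = (a & b) << 1
        a = a ^ b
        b = carry
    return a

def multiply(x, y):
    result = 0
    k = 0
    while x >> k:
        if (x >> k) & 1:                  # kth bit of x is 1...
            result = add(result, y << k)  # ...so add (2^k) * y, i.e. y shifted by k
        k += 1
    return result

print(multiply(13, 9))   # 117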

Binary number addition

I have just started doing some binary number exercises to prepare for a class that I will start next month, and I got the hang of all the conversions from decimal to binary and vice versa. But now, with the two letters 'a' and 'b' in this exercise, I am not sure how to apply that knowledge to add the bits in the following exercise:
Given two binary numbers a = (a_7 a_6 ... a_0) and b = (b_7 b_6 ... b_0). There is a calculator that can add 4-bit binary numbers. How many bits will be used to represent the result of a 4-bit addition? Why?
We would like to use our calculator to calculate a + b. For this, we can put up to eight bits (4 bits of the first and 4 bits of the second number) of our choice into the calculator and continue to use the result bit by bit.
How many additions does our calculator have to carry out for the addition of a and b at most? How many bits long is the result at most?
How many additions does the calculator have to perform at least for the result to be correct for all possible inputs a and b?
The number of bits needed to represent a 4-bit binary addition is 5. This is because there could be a carry-over bit that pushes the result to 5 bits.
For example, 1111 + 0010 = 10001.
This can be done the same way as adding decimal numbers. From right to left, just add the bits of the same significance. If the two bits are 1+1, the result is 10, so that place becomes a zero and the 1 carries over to the next pair of bits, just like decimal addition.
With regard to the min/max number of additions, this seems more like an algorithm-specific question. Look up some different binary addition algorithms, like ripple-carry for instance, and they should give you a better idea of what is meant by the question.
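For intuition, here is a Python sketch of the chaining idea, assuming (for simplicity) the 4-bit calculator also accepts a carry-in bit: two additions cover two 8-bit operands, low nibbles first, then high nibbles plus the carry the first addition produced.

def add4(a, b, carry_in=0):
    # The 4-bit calculator: returns a 4-bit sum plus a carry bit (5 bits in all).
    total = a + b + carry_in
    return total & 0b1111, total >> 4

def add8(a, b):
    lo, carry = add4(a & 0b1111, b & 0b1111)    # addition 1: low nibbles
    hi, carry = add4(a >> 4, b >> 4, carry)     # addition 2: high nibbles + carry
    return (carry << 8) | (hi << 4) | lo        # at most 9 bits long

print(bin(add8(0b11111111, 0b00000001)))   # 0b100000000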

Trying to understand why page sizes are a power of 2?

I read this:
Recall that paging is implemented by breaking up an address into a page and offset number. It is most efficient to break the address into X page bits and Y offset bits, rather than perform arithmetic on the address to calculate the page number and offset. Because each bit position represents a power of 2, splitting an address between bits results in a page size that is a power of 2.
I don't quite understand this answer; can anyone give a simpler explanation?
If you are converting a (linear) address to page:offset, you want to divide the address by the page size and take the integer quotient as the page, and the remainder as the offset.
This is done using integer division and modulus (MOD, "%") operators in your programming language.
A computer represents an address as a number, stored as binary bits.
Here's an example address: 12 is 1100 in binary.
If the page size is 3, then we'd need to calculate 12/3 and 12%3 to find the page and offset (4:0 respectively).
However, if the page size is 4 (a power of 2), then 4 in binary is 100, and integer division and modulus can be computed using special 'shortcuts': you can strip the last two binary digits to divide, and you can keep only the last two binary digits for modulus. So:
12/4 == 12>>2 (shifting to remove the last two digits)
12%4 == 12&(4-1) (4-1=3 is binary 11, and the '&' (AND) operator only keeps those)
Working with powers of two leads to more efficient hardware so that's what hardware designers do. Consider a cpu with a 32-bit address, and an n-bit page number:
+----------------------+--------------------+
| page number (n bits) | byte offset (32-n) |
+----------------------+--------------------+
This address is sent to the virtual memory unit, which separates the page number and byte offset directly, without any arithmetic operations at all. I.e., it treats the 32-bit value as an array of bits and has (more or less) a wire running directly to each bit. This allows the memory hardware to extract the page number and byte offset in parallel.
At the same time, this approach requires splitting on a bit boundary, which leads directly to power-of-2 page sizes.
If you have n binary digits at your disposal, then you can encode 2^n different values.
Given an address, your description states some bits will be used for the page, and some for the offset. As you're using a whole number of binary bits for the offset Y, the page size is naturally a power of 2, specifically 2^Y.
Because of how the data is represented. The page offset sets the size of the page, and since the data is represented in binary, you will have some number n of bits to define the offset, so you will have pages with a size of 2^n.
Let's suppose you have an address like 10011001 and you split it into 1001:1001 for page:offset. Since you have 4 bits to define the offset, your page size is 2^4.
If it is not an exact power of 2, then some memory addresses will be wasted. E.g., if the page size is 5 bytes, then to distinguish each byte we need 3 bits in the offset part of the address, because with 2 bits only 4 bytes can be addressed. But using a 3-bit offset for a 5-byte page leaves three addresses unused. That's why the page size should be a power of 2.
Since all addresses are binary and are divided into f and d (f = frame number, d = offset), making the size a power of 2 means no lengthy calculation is needed: just by looking at the address bits you can identify f and d. If the page size is 2^n, then the last n bits of the address represent the offset and the remaining bits represent the page number.
Hopefully this answer will help you.
I realized what I didn't get was how the page sizes are a power of 2. I found out afterwards:
At the smallest level, one offset bit can take 2 values (0 or 1), so it addresses a 2-byte page; with two offset bits there are 2 x 2 (or 2^2) combinations, and so on.
In general, if the offset is n bits, the page size is 2^n, since each additional bit doubles the number of addressable combinations.
The page size is always a power of 2, i.e. 2^n.
There are several reasons:
1. The memory address space is also a power of 2, so pages divide it evenly and are easy to move.
2. The address is made of two fields: a. page number, b. offset. A whole number of offset bits forces a power-of-2 page size.
3. Memory comes in quantised units, like charge (q).