I have just started doing some binary number exercises to prepare for a class that I will start next month, and I got the hang of converting from decimal to binary and vice versa. But now, with the two letters 'a' and 'b' in this exercise, I am not sure how to apply that knowledge to add the bits:
Given two binary numbers a = (a7a6 ... a0) and b = (b7b6 ... b0). There is a calculator that can add 4-bit binary numbers. How many bits are needed to represent the result of a 4-bit addition? Why?
We would like to use our calculator to compute a + b. For this we can feed up to eight bits (4 bits of the first and 4 bits of the second number) of our choice into the calculator and then reuse the result bit by bit.
How many additions does our calculator have to carry out, at most, to add a and b? How many bits long can the result be, at most?
How many additions does the calculator have to perform, at least, for the result to be correct for all possible inputs a and b?
The number of bits needed to represent a 4-bit binary addition is 5. This is because there could be a carry-over bit that pushes the result to 5 bits.
For example 1111 + 0010 = 10010.
This can be done the same way as adding decimal numbers. From right to left, just add the bits of the same significance. If the two bits are 1 + 1, the result is 10, so that place becomes a zero and the 1 carries over to the next pair of bits, just like decimal addition.
With regard to the min/max number of steps, this seems more like an algorithm-specific question. Look up some different binary addition algorithms, ripple-carry for instance, and that should give you a better idea of what the question is asking.
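To make the chunk-by-chunk idea concrete, here is a small C sketch (not part of the exercise, just an illustration) of adding two 8-bit numbers with a helper that, like the hypothetical calculator, only adds 4 bits at a time: add the low nibbles first, feed the resulting carry into the addition of the high nibbles, and keep the final carry as a possible ninth bit.

    #include <stdint.h>
    #include <stdio.h>

    /* Model of the 4-bit calculator: returns a 5-bit result (4 sum bits + carry). */
    static uint8_t add4(uint8_t x, uint8_t y, uint8_t carry_in)
    {
        return (x & 0xF) + (y & 0xF) + (carry_in & 1);   /* at most 15 + 15 + 1 = 31 */
    }

    int main(void)
    {
        uint8_t a = 0xF2, b = 0x1F;                        /* two 8-bit inputs */

        uint8_t low  = add4(a, b, 0);                      /* low nibbles, no carry in */
        uint8_t high = add4(a >> 4, b >> 4, low >> 4);     /* high nibbles + carry from low */

        uint16_t sum = ((uint16_t)high << 4) | (low & 0xF);  /* up to 9 bits long */
        printf("0x%02X + 0x%02X = 0x%03X\n", (unsigned)a, (unsigned)b, (unsigned)sum);
        return 0;
    }

Two calls to the 4-bit adder are enough here, and the full result can be up to 9 bits long.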
Figure 10.4 provides an algorithm for converting ASCII strings to binary values. Suppose the decimal number is arbitrarily long. Rather than store a table of 10 values for the thousands-place digit, another table for the 10 ten-thousands-place digit, and so on, design an algorithm to do the conversion without resorting to any tables whatsoever.
I have attached pictures of figure 10.4. I am not looking for an answer to the problem, but rather can someone please explain this problem and perhaps give some direction on how to go about creating the algorithm?
Figure 10.4 (two images attached)
I am unsure as to what it means by tables and do not know where to start really.
The tables are those global, initialized arrays: one called Lookup10 holding 10, 20, 30, 40, ..., and another called Lookup100 holding 100, 200, 300, 400...
You can ignore the tables: as per the assignment instructions, you're supposed to find a different way to accomplish this anyway. Or you can run that code in a simulator, or mentally, to understand how it works.
The bottom line is that LC-3, while it can do anything (it is Turing complete), can't do much in any one instruction. For arithmetic and logic, it can do ADD, NOT, and AND. That's pretty much it! But that's enough. Note that modern hardware does everything with only one logic gate, namely NAND, which is a binary operator: NAND is directly available; NOT is made by giving NAND the same operand for both inputs; AND by doing NOT after NAND; OR by applying NOT to both inputs first and then NAND; and so on.
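As a side illustration (this is not LC-3 code and not from the figure, just a sketch of the NAND remark), the derived gates can be written out for single bits like this:

    #include <stdio.h>

    /* One-bit logic built only from NAND, as described above. */
    static int nand(int a, int b) { return !(a && b); }
    static int not_(int a)        { return nand(a, a); }             /* NOT x = x NAND x       */
    static int and_(int a, int b) { return not_(nand(a, b)); }       /* AND   = NOT after NAND */
    static int or_(int a, int b)  { return nand(not_(a), not_(b)); } /* OR    = NAND of NOTs   */

    int main(void)
    {
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                printf("a=%d b=%d  NOT a=%d  a AND b=%d  a OR b=%d\n",
                       a, b, not_(a), and_(a, b), or_(a, b));
        return 0;
    }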
For example, LC-3 cannot multiply, divide, take a modulus, or right shift directly: each of those operations takes many instructions and, in the general case, some looping construct. Multiplication can be done by repeated addition, and division/modulus by repeated subtraction. These are very inefficient for larger operands; there are much more efficient algorithms, but they are also substantially more complex, so they increase program complexity well beyond that of the repeated-operation approach.
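For reference, the repeated-operation idea looks like this in C (a sketch of the concept only, not LC-3 code, and it assumes non-negative operands):

    #include <stdio.h>

    /* Multiply and divide using only addition and subtraction, the way a
       minimal instruction set has to.  Assumes b >= 0 (and b > 0 for divide). */
    static int mul_by_addition(int a, int b)
    {
        int product = 0;
        for (int i = 0; i < b; i++)
            product += a;                      /* b repeated additions */
        return product;
    }

    static int div_by_subtraction(int a, int b)
    {
        int quotient = 0;
        while (a >= b) {
            a -= b;                            /* what is left in a at the end is the remainder */
            quotient++;
        }
        return quotient;
    }

    int main(void)
    {
        printf("%d %d\n", mul_by_addition(6, 7), div_by_subtraction(45, 6));   /* 42 7 */
        return 0;
    }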
That subroutine goes backwards through the user's input string. It takes a string length count in R1 as a parameter supplied by the caller (not shown). It looks at the last character in the input and converts it from an ASCII character to a binary number.
(We would commonly do that conversion from ASCII character to numeric value using subtraction, moving the character values from the ASCII character range 0x30..0x39 to numeric values in the range 0..9, but they do it with masking, which also works. The subtraction approach integrates better with error detection (checking whether the character is not a valid digit, which is not done here), whereas the masking approach is simpler for LC-3.)
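In C terms, the two styles look roughly like this (illustrative only; the figure itself does the masking version in LC-3 assembly):

    #include <stdio.h>

    /* Subtraction approach: also allows validity checking.  Returns -1 if not a digit. */
    static int digit_by_subtraction(char c)
    {
        int v = c - '0';                 /* maps 0x30..0x39 down to 0..9 */
        return (v >= 0 && v <= 9) ? v : -1;
    }

    /* Masking approach: keep only the low 4 bits.  Simpler, but no error detection. */
    static int digit_by_masking(char c)
    {
        return c & 0x0F;
    }

    int main(void)
    {
        printf("%d %d\n", digit_by_subtraction('7'), digit_by_masking('7'));  /* 7 7  */
        printf("%d %d\n", digit_by_subtraction('x'), digit_by_masking('x'));  /* -1 8: masking misses the bad input */
        return 0;
    }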
The subroutine then obtains the 2nd last digit (moving backwards through the user's input string), converting it to binary using the mask approach. That yields a number between 0 and 9, which is used as an index into the first table, Lookup10. The value obtained from the table at that index position is basically the index × 10, so this table is a × 10 table. The same approach is used for the third digit (the first in the string, or the last going backwards), except it uses the 2nd table, which is a × 100 table.
The standard approach for string-to-binary conversion is called atoi (search for it), standing for ASCII to integer. It moves forwards through the string, and for every new digit, it multiplies the value computed so far by 10 before adding in the new digit's numeric value.
So, if the string is 456, it first obtains 4; then, because there is another digit, 4 × 10 = 40, then + 5 for 45, then × 10 for 450, then + 6 for 456, and so on.
The advantage of this approach is that it can handle any number of digits (up to overflow). The disadvantage, of course, is that it requires multiplication, which is a complication for LC-3.
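A minimal C sketch of that forward pass (just the idea, not the library atoi, with no sign or overflow handling; the × 10 here is exactly the multiplication discussed next):

    #include <stdio.h>

    /* Forward string-to-integer: value = value * 10 + next digit.  No tables needed. */
    static int to_int(const char *s)
    {
        int value = 0;
        for (; *s >= '0' && *s <= '9'; s++)
            value = value * 10 + (*s - '0');   /* handles any number of digits, until overflow */
        return value;
    }

    int main(void)
    {
        printf("%d\n", to_int("456"));   /* prints 456 */
        return 0;
    }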
Multiplication where one operand is the constant 10 is fairly easy even with LC-3's limited capabilities, and can be done with simple addition without looping. Basically:
n × 10 = n + n + n + n + n + n + n + n + n + n
and LC-3 can do those 9 additions in just 9 instructions. Still, we can also observe that:
n × 10 = n × 8 + n × 2
and also that:
n × 10 = (n × 4 + n) × 2 (which is n × 5 × 2)
which can be done in just 4 instructions on LC-3 (and none of these needs looping)!
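Written out in C, those two identities look like this (a sketch; LC-3 has no shift instruction, but multiplying by 2 is just adding a register to itself, so each shift below corresponds to one ADD):

    #include <stdio.h>

    static int times10_a(int n) { return (n << 3) + (n << 1); }           /* n*8 + n*2     */
    static int times10_b(int n) { int m = (n << 2) + n; return m << 1; }  /* (n*4 + n) * 2 */

    int main(void)
    {
        printf("%d %d\n", times10_a(7), times10_b(7));   /* 70 70 */
        return 0;
    }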
So, if you want to do this approach, you'll have to figure out how to go forwards through the string instead of backwards as the given table version does, and, how to multiply by 10 (use any one of the above suggestions).
There are other approaches as well if you study atoi. You could keep the backwards approach, but then you will have to multiply by 10, by 100, by 1000, and so on: a different factor for each successive digit. That might be done by repeated addition, or by keeping a count of how many times to multiply by 10, e.g. n × 1000 = n × 10 × 10 × 10.
To get the binary representation of a natural number like 20, we repeatedly divide the number by 2 until we cannot divide by 2 anymore. To get the binary representation of a decimal fraction like 0.4512, we repeatedly multiply the number by 2.
What is the logical explanation for why these two procedures give us a binary representation?
Thanks
It is based on the fact that numbers are coded in binary.
If the number A is an integer, A can be written as A = a_(n-1)×2^(n-1) + a_(n-2)×2^(n-2) + ... + a_1×2 + a_0, i.e. the sum of a_i×2^i for i = 0 .. n-1,
where each a_i is 0 or 1.
It is easy to see that if A is even, a_0 = 0, and if it is odd, a_0 = 1. So we already have the least significant bit, a_0.
Now, if we divide A by two, a_0 disappears and we have
A/2 = a_(n-1)×2^(n-2) + a_(n-2)×2^(n-3) + ... + a_2×2 + a_1
In this way we can determine a_1 from the parity of A/2, and if we continue, we get all the bits of A.
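Here is the same argument as a short C sketch (illustrative only): take the parity to get the current lowest bit, then halve to shift everything down one place.

    #include <stdio.h>

    /* Print the bits of a positive integer, least significant bit first,
       by repeatedly taking the parity and then halving. */
    static void print_bits_lsb_first(unsigned a)
    {
        while (a > 0) {
            printf("%u", a % 2);   /* the parity is the current lowest bit */
            a /= 2;                /* dividing by 2 drops that bit and shifts the rest down */
        }
        printf("\n");
    }

    int main(void)
    {
        print_bits_lsb_first(20);   /* prints 00101, i.e. 20 = 10100 read most significant bit first */
        return 0;
    }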
Fractional numbers are expressed in terms of negative powers of 2. If A = 0.a_(-1)a_(-2)...a_(-n), then A = a_(-1)/2 + a_(-2)/4 + ... + a_(-n)/2^n.
If we multiply it by two, 2×A = a_(-1) + a_(-2)/2 + ... + a_(-n)/2^(n-1). If 2×A ≥ 1, we must have a_(-1) = 1; otherwise a_(-1) = 0. And we can determine the other bits in a similar way by successive multiplications by two.
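And the fractional counterpart in C (a sketch; a value like 0.4512 has no finite binary expansion, so we stop after a fixed number of bits):

    #include <stdio.h>

    /* Print the first n bits after the binary point of a fraction 0 <= x < 1. */
    static void print_fraction_bits(double x, int n)
    {
        printf("0.");
        for (int i = 0; i < n; i++) {
            x *= 2;                            /* the integer part that appears is the next bit */
            if (x >= 1.0) { printf("1"); x -= 1.0; }
            else          { printf("0"); }
        }
        printf("\n");
    }

    int main(void)
    {
        print_fraction_bits(0.4512, 16);       /* first 16 binary digits of 0.4512 */
        return 0;
    }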
Say I have a 4-bit ALU with a carry flag, an overflow flag, and a sign flag (MSB). How would I go about subtracting, for example, two signed 8-bit numbers? I take the lower nibble of both numbers and subtract them, right? But I don't understand how to know whether there needs to be a 5th bit, how to carry that over to the LSB of the high nibble, and how to add it in, considering I am doing this in 2's complement so the carry-in is already being used. Any help would be appreciated.
This has been asked and answered here many times.
You know the rule for two's complement, yes? Invert and add one. Also, from grade school,
a - b = a + (-b).
We don't have subtract hardware; we have add hardware. What you do is a + (-b). Also from grade school we learned about carrying: 9 + 3 = 2, carry the 1. And from the second column on we have either two or three operands that are added together (a + (-b) + c, c being the carry in). If you think about it, we can have a carry in on every column; sometimes it is zero. That is how the hardware works: each column is three in, two out. Carry in[n], a[n], and b[n] go in; result[n] and carry out[n] come out. And as we know from grade school, the carry out of this column is the carry in of the next column. So for a normal add, the carry in of the least significant bit is always zero, but for a subtract we want to invert and add one, so what we do is invert b and change the carry in of that first bit to a 1, which is the same as
a + (~b) + 1, which equals a + (-b), which equals a - b.
As far as the addition and subtraction hardware is concerned, there is no such thing as a signed or unsigned add or subtract. There does exist an unsigned overflow (the carry out of the msbit) and a signed overflow (true if the carry in and carry out of the msbit are not the same, false if they match).
This works for any number of bits. For example, if you have 8-bit hardware but want to do math on 256-bit numbers, just do them 8 bits at a time and apply the carry out to the next 8 bits (an add-with-carry or subtract-with-borrow instruction). Visualize the single columns one at a time: 4 bits is just four of those columns, and likewise for 8, 9, or 37 bits, etc. You can take any of those larger numbers and draw a vertical line anywhere, separating it into two operations; all you have to do is what you do for single columns: the carry out of the msbit of the piece on the right becomes the carry in of the lsbit of the piece on the left of the dividing line. Apply this to 8-bit math with 4-bit hardware...
So a subtract is an add with a carry in of 1 and the second operand inverted. Now, some hardware inverts the carry out (unsigned overflow) on a subtract so that it becomes 1 for borrow and 0 for no borrow (unsigned borrow/overflow); some doesn't. So you have to know how your hardware handles this if you don't have a subtract-with-borrow instruction. If you have a subtract-with-borrow, it doesn't matter whether the hardware inverts the carry out: if it does, it will generally also invert the carry in (on a subtract); if it doesn't, then it won't on a subtract-with-borrow either. But if you have to use an add-with-carry to simulate a subtract-with-borrow, you may need to invert not only the second operand but also the carry bit. If you don't have an add-with-carry, then you have to simulate that as well by simply adding 1 or not.
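Here is one way to model the 8-bit-subtract-on-a-4-bit-ALU case in C (the names and setup are made up for the illustration): subtract by adding the inverted operand with an initial carry in of 1, and chain the carry out of the low nibble into the high nibble.

    #include <stdint.h>
    #include <stdio.h>

    /* Model of a 4-bit adder: low 4 bits are the sum, bit 4 is the carry out. */
    static uint8_t add4(uint8_t x, uint8_t y, uint8_t cin)
    {
        return (x & 0xF) + (y & 0xF) + (cin & 1);
    }

    /* 8-bit a - b done 4 bits at a time: a + ~b + 1, with the carry rippling between nibbles. */
    static uint8_t sub8(uint8_t a, uint8_t b)
    {
        uint8_t nb   = (uint8_t)~b;                       /* invert b ...                          */
        uint8_t low  = add4(a, nb, 1);                    /* ... and add 1 via the first carry in  */
        uint8_t high = add4(a >> 4, nb >> 4, low >> 4);   /* carry out of low feeds the high nibble */
        return (uint8_t)((high << 4) | (low & 0xF));      /* result truncated to 8 bits (two's complement) */
    }

    int main(void)
    {
        printf("%d\n", (int8_t)sub8(4, (uint8_t)-3));   /* 4 - (-3) = 7   */
        printf("%d\n", (int8_t)sub8(10, 25));           /* 10 - 25  = -15 */
        return 0;
    }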
I attempted to multiply binary 1111 (as the first input) by 1111 (as the second input). When I multiply as usual, I get to the addition of the partial products, where I have to carry the 1 along with three 1's, which adds up to 4. But it's impossible to represent 4 in 2 bits for this multiplication problem.
If you want to add multiple binary values, then you just carry whatever is left over after adding a column, regardless of how many bits you need to represent the carry.
It's just like doing the decimal addition 99+99+99+99+99+99+99+99+99+99+99+99: when adding the least significant column, you end up with 108, so you carry 10 even though it's too large to fit in a single digit.
Likewise, if you add the binary 11+11+11+11+11 you end up with 101 when adding the least significant column, so you carry 10.
However, normally you only add two binary numbers at a time, as that lets you get away with using a single bit for carry.
What you have to do is carry over into more than one digit when needed.
Take the scenario:
11
+11
+11
you would have 1001 as your answer, because the second column adds up to 4 (three 1s plus the carried 1) and 4 in binary is 100. Simply carry the 1s over into the correct places.
I'm watching some great lectures from David Malan (here) that go over binary. He talked about signed/unsigned, 1's complement, and 2's complement representations. There was an addition done of 4 + (-3), which lined up like this:
0100
1101 (flip 0011 to 1100, then add 1)
----
0001
But he waved his magical hands and threw away the last carry. I did some Wikipedia research but didn't quite get it. Can someone explain to me why that particular carry (from the 8's column into the 16's column) was dropped, but he kept the one just prior to it?
Thanks!
The last carry was dropped because it does not fit in the target space. It would be the fifth bit.
If he had carried out the same addition, but with, for example, 8-bit storage, it would have looked like this:
00000100
11111101
--------
00000001
In this situation we would also be stuck with an "unused" carry.
We have to treat carries this way to make addition with two's complement work properly, but that's all good, because this is the easiest way of treating carries when you have limited storage. Anyway, we get the correct result, right :)
x86 processors store such an additional carry in the carry flag (CF), which can be tested with certain instructions.
A carry is not the same as an overflow
In the example you do have a carry out of the MSB. By definition, this carry ends up on the floor. (If there was someplace for it to go, then it would not have been out of the MSB.)
But adding two numbers with different signs cannot overflow. An overflow can only happen when two numbers with the same sign produce a result with a different sign.
If you extend the left-hand side by adding more digit positions, you'll see that the carry rolls over into an infinite number of bit positions towards the left, so you never really get a final carry of 1. So the answer is positive.
...000100
+...111101
----------
....000001
At some point you have to set the number of bits to represent the numbers. He chose 4 bits. Any carry into the 5th bit is lost. But that's OK because he decided to represent the number in just 4 bits.
If he decided to use 5 bits to represent the numbers he would have gotten the same result.
That's the beauty of it... your result will be the same size as the terms you are adding. So the fifth bit is thrown out.
In 2's complement, you use the carry bits to tell whether there was an overflow in the last operation.
You must look at the LAST two carry bits to see if there was overflow. In your example, the last two carry bits were 11 meaning that there was no overflow.
If the last two carry bits are 11 or 00 then no overflow occurred. If the last two carry bits are 10 or 01 then there was overflow. That is why he sometimes cared about the carry bit and other times he ignored it.
The first row below is the carry row. The left-most bits in this row are used to determine if there was overflow.
1100
0100
1101
----
0001
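A short C sketch of that rule (4-bit values assumed; the helper name is made up): compute the carry into the sign bit and the carry out of it, and compare them.

    #include <stdio.h>

    /* Add two 4-bit two's complement values and report signed overflow using
       the "carry into the MSB vs. carry out of the MSB" rule described above. */
    static void add4_report(unsigned a, unsigned b)
    {
        unsigned low3      = (a & 0x7) + (b & 0x7);   /* sum of the three low bits         */
        unsigned carry_in  = (low3 >> 3) & 1;         /* carry into the sign bit           */
        unsigned full      = (a & 0xF) + (b & 0xF);
        unsigned carry_out = (full >> 4) & 1;         /* carry out of the sign bit         */
        unsigned result    = full & 0xF;              /* 4-bit result, extra carry dropped */

        printf("result=%u%u%u%u  overflow=%s\n",
               (result >> 3) & 1, (result >> 2) & 1, (result >> 1) & 1, result & 1,
               carry_in != carry_out ? "yes" : "no");
    }

    int main(void)
    {
        add4_report(0x4, 0xD);   /* 4 + (-3): carries are 1 and 1, no overflow, result 0001 */
        add4_report(0x4, 0x5);   /* 4 + 5: carry in 1, carry out 0, so signed overflow      */
        return 0;
    }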
Looks like you're only using 4 bits, so there is no 16's column.
If you were using more than 4 bits then the -3 representation would be different, and the carry of the math would still be thrown off the end. For example, with 6 bits you'd have:
000100
111101
------
1000001
and since the carry is outside the bit range of your representation it's gone, and you only have 000001
Consider 25 + 15:
5+5 = 10, we keep the 0 and let the 1 go to the tens-column. Then it's 2 + 1 (+ 1) = 4. Hence the result is 40 :)
It's the same thing with binary numbers: 0 + 1 = 1, 0 + 0 = 0, 1 + 1 = 10, so send the 1 to the 8's column; then 0 + 1 (+ 1) = 10, so send the 1 to the next column. Here's the overflow, and that's why we just throw the 1 away.
This is why 2's complement is so great. It allows you to add/subtract just like you do in base 10, because you (ab)use the fact that the sign bit is the MSB, which cascades operations all the way into overflow when necessary.
Hope I made myself understood. It's quite hard to explain this when English is not your native tongue :)
When performing 2's complement addition, the only time that a carry indicates a problem is when there's an overflow condition - that can't happen if the 2 operands have a different sign.
If they have the same sign, then the overflow condition is when the result's sign bit differs from that of the two operands, i.e., there's a carry into the most significant bit.
If I remember my computer architecture learnin', this is often detected at the hardware level by a flag that's set when the carry into the most significant bit is different from the carry out of the most significant bit. That is not the case in your example (there's a carry into the msb as well as out of the msb).
One simple way to think of it is as "the sign not changing". If the carry into the msb is different from the carry out, then the sign has improperly changed.
The carry was dropped because there wasn't anything that could be done with it. If it's important to the result, it means that the operation overflowed the range of values that could be stored in the result. In assembler, there's usually an instruction that can test for the carry beyond the end of the result, and you can explicitly deal with it there - for example, carrying it into the next higher part of a multiple precision value.
Because you are talking about a 4-bit representation. It's unusual compared to an actual machine, but if we take for granted for a moment that a computer has 4 bits in each byte, then we have the following property: a byte holds values from -8 to 7, and it wraps around past those limits. Anything outside that range cannot be stored. Besides, what would you do with an extra 5th bit beyond the sign bit anyway?
Now, given that, we can see from everyday math that 4 + (-3) = 1, which is exactly what you got.