Turing machine for addition and comparison of binary numbers

Good day everyone!
I am trying to solve this exercise for learning purposes. Can someone guide me in solving these three questions?
I tried the first question, the addition of two binary numbers separated by '+'. Earlier I did addition by representing each number in unary, e.g. 5 = 1 1 1 1 1 (or 0 0 0 0 0), adding them, and producing the result in the same format. But I have no clue how to add or represent two numbers in actual binary, separated by '+'. Will the head of the Turing machine move from the left, reach the plus sign, and then work on the parts left and right of the '+'? And how will the addition itself be performed? As far as my little knowledge goes, a TM cannot simply add binaries; we have to build up the logic ourselves, as in the unary case. The same question applies to comparing two binary numbers.
Regards

The following program, inspired by the edX / MITx course Paradox and Infinity, shows how to perform binary addition with a Turing machine, where the numbers to be added are input to the Turing machine and are separated by a blank.
The Turing Machine
uses the second number as a counter
decrements the second number by one
increments the first number by one
until the second number becomes 0.
The following animation of the simulation of the Turing machine shows how 13 (binary 1101) and 5 (binary 101) are added to yield 18 (binary 10010).
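The same counter idea is easy to prototype before designing states. This is a Python sketch at the string level, not a Turing machine; the helper names are mine:

def increment(bits):
    # flip trailing 1s to 0s, then the first 0 to 1; prepend a 1 on overflow
    digits = list(bits)
    i = len(digits) - 1
    while i >= 0 and digits[i] == '1':
        digits[i] = '0'
        i -= 1
    if i < 0:
        return '1' + ''.join(digits)
    digits[i] = '1'
    return ''.join(digits)

def decrement(bits):
    # flip trailing 0s to 1s, then the first 1 to 0 (assumes bits > 0)
    digits = list(bits)
    i = len(digits) - 1
    while digits[i] == '0':
        digits[i] = '1'
        i -= 1
    digits[i] = '0'
    return ''.join(digits)

def add(a, b):
    # use the second number as a counter, exactly as the machine above does
    while set(b) != {'0'}:
        b = decrement(b)
        a = increment(a)
    return a

print(add('1101', '101'))  # 13 + 5 -> '10010' (18)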

I'll start with problems 2 and 3 since they are actually easier than problem 1.
We'll assume we have valid input (non-empty binary strings on both sides with no leading zeroes), so we don't need to do any input validation. To check whether the numbers are equal, we can simply bounce back and forth across the = symbol and cross off one digit at a time. If we find a mismatch at any point, we reject. If we have a digit remaining on the left and can't find one on the right, we reject. If we run out of digits on the left and still have some on the right, we reject. Otherwise, we accept.
Q T Q' T' D
q0 0 q1 X right // read the next (or first) symbol
q0 1 q2 X right // of the first binary number, or
q0 = q7 = right // recognize no next is available
q1 0 q1 0 right // skip ahead to the = symbol while
q1 1 q1 1 right // using state to remember which
q1 = q3 = right // symbol we need to look for
q2 0 q2 0 right
q2 1 q2 1 right
q2 = q4 = right
q3 X q3 X right // skip any crossed-out symbols
q3 0 q5 X left // in the second binary number
q3 1,b rej 1,b left // then, make sure the next
q4 X q4 X right // available digit exists and
q4 0,b rej 0,b left // matches the one remembered
q4 1 q5 X left // otherwise, reject
q5 X q5 X left // find the = while ignoring
q5 = q6 = left // any crossed-out symbols
q6 0 q6 0 left // find the last crossed-out
q6 1 q6 1 left // symbol in the first binary
q6 X q0 X right // number, then move right
// and start over
q7 X q7 X right // we ran out of symbols
q7 b acc b left // in the first binary number,
q7 0,1 rej 0,1 left // make sure we already ran out
// in the second as well
This TM could first sanitize input by ensuring both binary strings are non-empty and contain no leading zeroes (crossing off any it finds).
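If you want to sanity-check the table, here is a small Python simulator for it (my transcription: the comma shorthand above is expanded into separate rows, 'b' is the blank, and rej/acc are the halting states):

# (state, symbol) -> (state', symbol', direction)
delta = {
    ('q0','0'): ('q1','X','R'), ('q0','1'): ('q2','X','R'), ('q0','='): ('q7','=','R'),
    ('q1','0'): ('q1','0','R'), ('q1','1'): ('q1','1','R'), ('q1','='): ('q3','=','R'),
    ('q2','0'): ('q2','0','R'), ('q2','1'): ('q2','1','R'), ('q2','='): ('q4','=','R'),
    ('q3','X'): ('q3','X','R'), ('q3','0'): ('q5','X','L'),
    ('q3','1'): ('rej','1','L'), ('q3','b'): ('rej','b','L'),
    ('q4','X'): ('q4','X','R'), ('q4','1'): ('q5','X','L'),
    ('q4','0'): ('rej','0','L'), ('q4','b'): ('rej','b','L'),
    ('q5','X'): ('q5','X','L'), ('q5','='): ('q6','=','L'),
    ('q6','0'): ('q6','0','L'), ('q6','1'): ('q6','1','L'), ('q6','X'): ('q0','X','R'),
    ('q7','X'): ('q7','X','R'), ('q7','b'): ('acc','b','L'),
    ('q7','0'): ('rej','0','L'), ('q7','1'): ('rej','1','L'),
}

def run(word):
    tape, pos, state = list(word) + ['b'], 0, 'q0'
    while state not in ('acc', 'rej'):
        state, tape[pos], move = delta[(state, tape[pos])]
        pos += 1 if move == 'R' else -1
    return state == 'acc'

print(run('101=101'))  # True
print(run('101=100'))  # False (mismatched digit)
print(run('10=101'))   # False (left side runs out first)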
Do to "greater than", you could easily do the following:
1. Check whether the length of the first binary number (after removing leading zeroes) is greater than, equal to, or less than the length of the second binary number (after removing leading zeroes). If the first one is longer than the second, accept. If the first one is shorter than the second, reject. Otherwise, continue to step 2.
2. Check for equality as in the other problem, but accept if at any point you have a 1 in the first number and find a 0 in the second. This works because we know there are no leading zeroes, the numbers have the same number of digits, and we are checking digits in descending order of significance. Reject if you find the other mismatch or if you determine the numbers are equal.
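At the string level, those two steps amount to the following sketch (plain Python over the digit strings, not the machine itself):

def greater_than(a, b):
    a, b = a.lstrip('0'), b.lstrip('0')  # step 1: remove leading zeroes
    if len(a) != len(b):                 # the longer number is bigger
        return len(a) > len(b)
    return a > b                         # step 2: equal lengths, so the first
                                         # mismatching digit (most significant
                                         # first) decides; lexicographic order
                                         # on '0'/'1' matches numeric order

print(greater_than('1101', '101'))  # True
print(greater_than('101', '110'))   # False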
To add numbers, the problem says to increment and decrement, but I feel like just adding with carry is not going to be significantly harder. An outline of the procedure is this (a code sketch follows the tape example below):
Begin with carry = 0.
Go to the least significant unmarked digit of the first number. Go to state (dig=X, carry=c), where c is the current carry.
Go to the least significant unmarked digit of the second number. Go to state (sum=(X+Y+c)%2, carry=(X+Y+c)/2).
Go after the second number and write down the sum digit.
Go back and continue the process until one of the numbers runs out of digits.
Then, continue with whatever number still has digits, adding just those digits and the carry.
Finally, erase the original input and copy the sum backwards to the beginning of the tape.
An example of the distinct steps the tape might go through:
#1011+101#
#101X+101#
#101X+10X#
#101X+10X=#
#101X+10X=0#
#10XX+10X=0#
#10XX+1XX=0#
#10XX+1XX=00#
#1XXX+1XX=00#
#1XXX+XXX=00#
#1XXX+XXX=000#
#XXXX+XXX=000#
#XXXX+XXX=0000#
#XXXX+XXX=00001#
#XXXX+XXX=0000#
#1XXX+XXX=0000#
#1XXX+XXX=000#
#10XX+XXX=000#
#10XX+XXX=00#
#100X+XXX=00#
#100X+XXX=0#
#1000+XXX=0#
#1000+XXX=#
#10000XXX=#
#10000XXX#
#10000XX#
#10000X#
#10000#
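Here is the add-with-carry procedure at the string level in Python; it models the arithmetic of the tape trace above (digits consumed right to left, sum digits appended least significant first, then the whole sum reversed at the end):

def tm_style_add(a, b):
    i, j, carry, rev_sum = len(a) - 1, len(b) - 1, 0, []
    while i >= 0 or j >= 0 or carry:
        x = int(a[i]) if i >= 0 else 0  # next digit of the first number, if any
        y = int(b[j]) if j >= 0 else 0  # next digit of the second number, if any
        total = x + y + carry
        rev_sum.append(str(total % 2))  # sum digit, written after the '='
        carry = total // 2
        i, j = i - 1, j - 1
    return ''.join(reversed(rev_sum))   # copy the sum back, reversed

print(tm_style_add('1011', '101'))  # '10000' (11 + 5 = 16)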

There are two ways to solve the addition problem. Assume your input tape is in the form ^a+b$, where ^ and $ are symbols telling you you've reached the front and back of the input.
You can increment b and decrement a by 1 each step until a is 0, at which point b will be your answer. This is assuming you're comfortable writing a TM that can increment and decrement.
You can implement a full adding TM, using carries as you would if you were adding binary numbers on paper.
For either option, you need code to find the least significant bit of both a and b. The problem specifies that the most significant bit is first, so you'll want to start at + for a and $ for b.
For example, let's say we want to increment 1011$. The algorithm we'll use is: find the least significant digit. If it's a 0, replace it with a 1 and stop. If it's a 1, replace it with a 0 and move left.
Start by finding $, moving the read head there. Move the read head to the left.
You see a 1. Write 0. Move the read head to the left.
You see a 1. Write 0. Move the read head to the left.
You see a 0. Write 1.
Return the read head to $. The binary number is now 1100$.
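That walk is short enough to mirror directly in Python (a sketch with an explicit head position; the list grows on overflow, which a real machine's tape gets for free):

def tm_increment(tape):
    # tape like '1011$'; walk left from $, flipping 1s to 0s
    # until a 0 (or the left end) is found, which becomes 1
    cells = list(tape)
    head = cells.index('$') - 1
    while head >= 0 and cells[head] == '1':
        cells[head] = '0'
        head -= 1
    if head >= 0:
        cells[head] = '1'
    else:
        cells.insert(0, '1')
    return ''.join(cells)

print(tm_increment('1011$'))  # '1100$'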
To compare two numbers, you need to keep track of which values you've already looked at. This is done by extending the alphabet with "marked" characters. 0 could be marked as X, 1 as Y, for example. X means "there's a 0 here, but I've seen it already."
So, for equality, we can start at ^ for a and = for b. (Assuming the input looks like ^a=b$.) The algorithm is to find the first unmarked bit of each of a and b and compare them. The first time you get to a different value, halt and reject. If you reach = and $ at the same time, halt and accept; if only one side runs out of digits, halt and reject.
Let's look at input ^11=10$:
Read head starts at ^.
Move the head right until we find an unmarked bit.
Read a 1. Write Y. Tape reads ^Y1=10$. We're in a state that represents having read a 1.
Move the head right until we find =.
Move the head right until we find an unmarked bit.
Read a 1. This matches the bit we read before. Write a Y.
Move the head left until we find ^.
Go to step 2.
This time, we'll read a 1 in a and read the 0 in b. We'll halt and reject.
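The marking idea, sketched in Python with X standing for a seen 0 and Y for a seen 1 (a model of the algorithm, not of the head movements):

def equal(tape):
    cells = list(tape)            # e.g. list('^11=10$')
    i = 1                         # first unmarked bit of a
    j = cells.index('=') + 1      # first unmarked bit of b
    while True:
        if cells[i] == '=' or cells[j] == '$':
            # accept only if both sides run out at the same time
            return cells[i] == '=' and cells[j] == '$'
        if cells[i] != cells[j]:
            return False          # first mismatch: halt and reject
        mark = 'X' if cells[i] == '0' else 'Y'
        cells[i] = cells[j] = mark
        i, j = i + 1, j + 1

print(equal('^11=10$'))  # False
print(equal('^11=11$'))  # True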
Hope this helps to get you started.


Theory behind multiplying two numbers without operands

I have been reading Elements of Programming Interviews and am struggling to understand the passage below:
"The algorithm taught in grade-school for decimal multiplication does
not use repeated addition- it uses shift and add to achieve a much
better time complexity. We can do the same with binary numbers- to
multiply x and y we initialize the result to 0 and iterate through the
bits of x, adding (2^k)y to the result if the kth bit of x is 1.
The value (2^k)y can be computed by left-shifting y by k. Since we
cannot use add directly, we must implement it. We can apply the
grade-school algorithm for addition to the binary case, i.e, compute
the sum bit-by-bit and "rippling" the carry along.
As an example, we show how to multiply 13 = (1101) and 9 = (1001)
using the algorithm described above. In the first iteration, since
the LSB of 13 is 1, we set the result to (1001). The second bit of
(1101) is 0, so we move on the third bit. The bit is 1, so we shift
(1001) to the left by 2 to obtain (1001001), which we add to (1001) to
get (101101). The forth and final bit of (1101) is 1, so we shift
(1001) to the left by 3 to obtain (1001000), which we add to (101101)
to get (1110101) = 117.
My Questions are:
What is the overall idea behind this, how is it a "bit-by-bit" addition
where does (2^k)y come from
what does it mean by "left-shifting y by k"
In the example, why do we set result to (1001) just because the LSB of 13 is 1?
The algorithm relies on the way numbers are coded in binary.
Let A be an unsigned number. A is coded by a set of bits a_{n-1} a_{n-2} ... a_0 in such a way that A = ∑_{i=0}^{n-1} a_i × 2^i.
Now, assume you have two numbers A and B coded in binary and you want to compute A×B:
B×A = B × ∑_{i=0}^{n-1} a_i × 2^i = ∑_{i=0}^{n-1} B × a_i × 2^i
a_i is equal to 0 or 1. If a_i = 0, the sum is not modified. If a_i = 1, we need to add B × 2^i.
So, we can simply deduce the multiplication algorithm
result = 0
for i in range(n):           # n = number of bits
    if a[i] == 1:            # a[i] is the ith bit of A, least significant first
        result = result + B * 2**i   # equivalently: result += B << i
What is the overall idea behind this, how is it a "bit-by-bit" addition
It is just an application of the previous method, where you successively process every bit of the multiplier.
where does (2^k)y come from
As mentioned above, it comes from the way binary numbers are coded. If the ith bit is set, then there is a 2^i in the decomposition of the number.
what does it mean by "left-shifting y by k"
Left shift means "pushing" the bits leftwards and filling the "holes" with zeroes. Hence if the number is 1101 and it is left-shifted by three, it becomes 1101000.
This is the way to multiply the number by 2^i (just as left-shifting a decimal number by 2 and putting zeroes at the right places is the way to multiply by 100 = 10^2).
In the example, why do we set result to (1001) just because the LSB of 13 is 1?
Because there is a 1 at the rightmost position, which corresponds to 2^0. So we left-shift by 0 and add it to the result, which is initialized to 0.
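Putting the pieces together, the whole shift-and-add multiplier is a few lines of Python (ints for brevity; per the passage, the += would itself be a ripple-carry add if the add instruction is off limits):

def multiply(x, y):
    result = 0
    k = 0
    while x:
        if x & 1:             # kth bit of x is 1...
            result += y << k  # ...so add (2^k)y, i.e. y left-shifted by k
        x >>= 1
        k += 1
    return result

print(multiply(13, 9))  # 117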

Turing Machines Basic Operations

In this problem, you are expected to construct several Turing machines. For each Turing machine, provide a high-level description of how it works and provide the graph representation. (You can leave out the formal definition if the graph is complete.)
a) Write a Turing machine T_inc that can add 1 to a binary encoded number stored on the tape of the Turing machine. The binary number is enclosed by the symbol $ and you can assume that the binary number starts with a 0 (i.e., there is no overflow to consider). For example, the input $0100$ is transformed by T_inc into $0101$ and the input $0111$ is transformed by T_inc into $1000$. The Turing machine starts with the head located at the $ sign left of the number.

b) Write a Turing machine T_dec that can subtract 1 from a binary encoded number stored on the tape of the Turing machine. The binary number is enclosed by the symbol $. For example, the input $0100$ is transformed by T_dec into $0011$ and the input $0111$ is transformed by T_dec into $0110$. The Turing machine starts with the head located at the $ sign left of the number.
Hint: You can invert all bits of the number, then add one to the number, then invert all bits of the number again.

c) Write a Turing machine T_add that can add two binary encoded numbers on the tape of the Turing machine. The binary numbers are each enclosed by the symbol $ and you can assume that the binary numbers have a sufficient number of leading 0s to hold the sum (i.e., there is no overflow to consider). For example, the input $0100$0010$ is transformed by T_add into $0000$0110$, i.e., the first number was added to the second number.
Hint: You can construct T_add out of T_inc and T_dec: while the first number is not zero, decrement the first number and increment the second number. Please name your states such that it is clear to which part of your textual description they belong.

You may find it useful to write a Haskell program to simulate your Turing machines (i.e., following the example shown in class). This way you can run tests against your Turing machine to verify it is working correctly. Feel free to submit your Haskell code so that we can verify your design of the Turing machines.
Note that it is your responsibility to document things properly. If you hand in something we cannot understand, you will likely get zero points.
(a) To add 1 to a number in binary, you start with the least significant digit (looks like this is the last one on your tape), add 1 with carry, move left, and repeat with the carry from the previous step until the carry is 0. It is guaranteed the carry will be 0 at least once since the problem assumes you start with a 0 on the front of the tape. A TM might look like this:
Q T Q' T' D
-----------------------
// read the leading $
// reject if not there
q0 $ q1 $ right
q0 0 hR 0 same
q0 1 hR 1 same
// go to the last digit
q1 $ q2 $ left
q1 0 q1 0 right
q1 1 q1 1 right
// add 1 by swapping digit values
// reject if overflow
// accept if carry becomes 0
q2 $ hR $ same
q2 0 hA 1 same
q2 1 q2 0 left
b) To subtract 1 from a number in binary, you start with the least significant digit, subtract 1 with borrow, move left, and repeat with the borrow from the previous step until the borrow is 0. This is either a trick question or it was ill-posed: given a number in this encoding, even with the assumption from part (a), it is not guaranteed that subtracting one is possible; when presented with the encoding of zero, the required behavior of the TM is undefined. Assuming the number is greater than zero:
Q T Q' T' D
-----------------------
// read the leading $
// reject if not there
q0 $ q1 $ right
q0 0 hR 0 same
q0 1 hR 1 same
// go to the last digit
q1 $ q2 $ left
q1 0 q1 0 right
q1 1 q1 1 right
// subtract 1 by swapping digit values
// reject if underflow
// accept if borrow becomes 0
q2 $ hR $ same
q2 0 q2 1 left
q2 1 hA 0 same
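The hint's invert/increment/invert trick from the problem statement is worth sanity-checking at the string level before committing to states (a quick Python check, not the machine itself; the fixed-width increment assumes no overflow, as in part (a)):

def invert(bits):
    return ''.join('1' if d == '0' else '0' for d in bits)

def increment(bits):
    return format(int(bits, 2) + 1, '0%db' % len(bits))

def decrement(bits):
    # invert all bits, add one, invert all bits again
    return invert(increment(invert(bits)))

print(decrement('0100'))  # '0011'
print(decrement('0111'))  # '0110'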
c) Forget the hint and just implement binary addition directly. It is not stated but heavily implied that the numbers have the same number of digits, or at least that the second number has at least as many digits as the first. Add the least significant digits of both numbers, with carry, marking digits of the first and second numbers as you go, and finish when you run out of digits. Then, unmark the digits. Here's how your TM should process the example:
$0100$0010$
$0100C0010$
$010AC0010$
$010AC001A$
$01AAC001A$
$01AAC00BA$
$0BAAC00BA$
$0BAAC0BBA$
$ABAAC0BBA$
$ABAACABBA$
$0BAACABBA$
$00AACABBA$
$000ACABBA$
$0000CABBA$
$0000$ABBA$
$0000$0BBA$
$0000$01BA$
$0000$011A$
$0000$0110$

One's complement and two's complement logic behind [closed]

For my computer science class I need to finish a project where I desperately need to understand the logic of one's complement and two's complement. I already know how to construct these and also how a hardware adder works when dealing with two's complement. The thing that is bothering me, and that I need help with, is the logic behind one's complement addition. Why do we have to add the bit we would carry over (and discard when using two's complement coding) back into the sum to get the correct result? I don't understand why binary addition behaves like this when adding one's complement numbers, and why that last addition of the carry-over bit is so crucial. I need to understand the logic behind it. Thanks
A 1's complement number (say,16 bits) can be described as below - in terms of the relationship between the binary code c (which is 0..0xFFFF, or 0..65535) and the 'value' x which the code represents, x is in range -32767.. 32767.
if the value x is 0..32767, the code c is numerically equal to x, and thus has a zero in the MSB
if the value x is negative, -1 .. -32767, then c is 65535 + x (that is, 65535 - |x|), and the MSB of c is 1.
The code c=65535 = 0xFFFF is to be interpreted as zero; since it has a 1 in the MSB I will call it '-0'.
Now consider what happens to the x values when you add two codes using a binary adder:
If both x1, x2 are positive, the adder adds the codes: c = c1 + c2. both MSBs are zero at the input, so there will be no carry. So the value of c will be the sum x1+ x2 and this is how it will be interpreted (assuming the sum doesn't overflow into the MSB).
If both x1, x2 are negative, then by adding c1+c2 you are adding (65535+x1)+(65535+x2). There will always be a carry-out; by discarding this you end up with a binary value equal to 65534+(x1+x2). To get the code we want to represent this negative sum, i.e. 65535+(x1+x2), we need to add 1 more.
If the signs are different, the sum of codes is 65535+(x1+x2). There may or may not be a carry-out. Since the MSBs are different, the carry-out occurs if, and only if, the MSB of the sum is zero. In the case where the true sum (x1+x2) < 0, you won't have a carry-out, and the value 65535+(x1+x2) is the proper code for (x1+x2). If the sum (x1+x2)>0, there will be a carry-out; the sum of the codes (after discarding the carry out) will be (x1+x2)-1, and thus you have to add 1 to get the proper code (x1+x2).
So, looking at all the cases, it works out that whenever a carry-out occurs, (and you effectively subtract 65536 by discarding it) you need to add 1 to get the proper code to represent the sum.
When x1+x2 = 0 -- with one <0 and the other >0 - the sum of codes will always be 65535, which is left as is and is to be interpreted as zero ("-0").
When both are zero, you have three cases:
0 + 0 = 0 Fairly simple...
-0 + -0: codes are 0xFFFF + 0xFFFF = (carry+ 0xFFFE) = 0xFFFF after adding carry back in, interpreted as -0.
-0+ 0: codes are 0xFFFF + 0 = 0xFFFF (-0)
So the only case where the 1's complement sum is a 'proper' 0 is when both inputs are proper 0. All other cases adding up to zero give a '-0'.
Mathematically, you could say that 1's complement uses c ≡ x (mod 65535), with c constrained to 0..0xFFFF. And so the addition needs to be done modulo 65535. Each time you have a carry-out, you subtract 65536 (by discarding it from the top) and add 1 (by adding it at the bottom); since you subtract 65535 in the process, the value modulo 65535 is preserved.
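The end-around carry rule is compact enough to test directly; here is a sketch with 16-bit codes as plain ints (function names are mine):

def ones_comp_add(c1, c2, bits=16):
    mask = (1 << bits) - 1
    s = c1 + c2
    if s > mask:            # carry-out: discard 65536 from the top...
        s = (s & mask) + 1  # ...and add 1 at the bottom, preserving mod 65535
    return s & mask

def encode(x, bits=16):
    # value -> 1's complement code: 65535 + x for negative x
    return x if x >= 0 else (1 << bits) - 1 + x

print(hex(ones_comp_add(encode(5), encode(-3))))   # 0x2
print(hex(ones_comp_add(encode(-1), encode(-1))))  # 0xfffd, i.e. -2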

Wrapping my head around hardware representations of numbers: a hypothetical two's complement question

This is a super naive question (I know), but I think that it will make for a good jumping off point into considering how the basic instruction set of a CPU actually gets carried out:
In a two's complement system, you cannot invert the sign of the most negative number that your implementation can represent. The theoretical reason for this is obvious in that the negation of the most negative number would be out of the range of the implementation (the range is always something like
-128 to 127).
However, what actually happens when you try to carry out the negation operation on the most negative number is pretty strange. For example, in an 8 bit representation, the most negative number is -128, or 1000 0000 in binary. Normally, to negate a number you would flip all the bits and then add one. However, if you try to do this with -128 you end up with:
1000 0000 ->
0111 1111 ->
1000 0000
the same number that you started out with. For this reason, Wikipedia calls it "the weird number".
In that same wikipedia article, it says that the above negation
is detected as an overflow condition since there was a carry into but not out of the most-significant bit.
So my question is this:
A) What the heck does that mean? and
B) It seems like the CPU would need to perform an extra error checking step each and every time it carried out a basic arithmetic operation in order to avoid accidents relating to this negation, creating significant overhead. If that is the case, why not just truncate the range of numbers that can be represented to leave the weird number out (i.e. -127 to 127 for 8 bits)? If that isn't the case, how can you implement such error checking without creating extra overhead?
The carry-out bit from the MSB is used as a flag to indicate that we need more bits. Without it, we would have a system of modular arithmetic [1] without any way of detecting when we wrap around.
In modular arithmetic, you don't deal with numbers but with equivalence classes of numbers that have the same remainder. In such a system, after adding 1 to 127, you would get −128, and you would conclude that +128 and −128 belong to the same equivalence class.
If you restricted yourself to numbers in the range −127 to +127, you would have to redefine addition, since 127 + 1 = −127 is nonsense.
Two's-complement arithmetic, when presented to you by a computer, is essentially modular arithmetic with the ability to detect an overflow.
This is what a 4-bit adder would look like when adding 0001 to
0111. You can see that in the MSB the carry-in and carry-out are
different:
     0        0        0        1
     |  0     |  1     |  1     |  1
     |  |     |  |     |  |     |  |
     v  v     v  v     v  v     v  v
0 <- ADD <-1- ADD <-1- ADD <-1- ADD <- 0
      |        |        |        |
      v        v        v        v
      1        0        0        0
It is this flag that the ALU uses to signal that an overflow occurred,
without any extra steps.
[1] Modular arithmetic goes from 0 to 255 instead of −128 to 127, but the basic idea is the same.
It's not that the CPU does another check, it's that the transistors are arranged to notice when this happens. And they are built that way because the engineers picked two's complement before they started designing the thing.
The result is that it happens during the same clock cycle as a non-overflowing result would be returned.
How does it work?
The "add 1" stage implements a cascade logic: starting with the LSB each bit is subjected in turn to the truth table
old-bit carry-in new-bit carry-out
-------------------------------------
0 0 0 0
0 1 1 0
1 0 1 0
1 1 0 1
(that is new-bit = old-bit xor carry-in and carry-out = old-bit and carry-in). The "carry-in" for the LSB is the 1 that we're adding, and for the rest of the bits it is the "carry-out" of the previous one (which is why this has to be done in a cascade).
The last of these circuits just adds a circuit for signed-overflow = (carry-in and not carry-out).
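Spelled out as a sketch, here is invert-and-add-one with exactly that cascade, raising the overflow flag when the MSB sees a carry in without a carry out:

def negate(value, bits=8):
    inverted = [~(value >> i) & 1 for i in range(bits)]  # flipped bits, LSB first
    carry = 1                                  # the 1 we are adding
    out = overflow = 0
    for i in range(bits):
        new_bit = inverted[i] ^ carry          # new-bit = old-bit xor carry-in
        carry_out = inverted[i] & carry        # carry-out = old-bit and carry-in
        out |= new_bit << i
        if i == bits - 1:
            overflow = carry & ~carry_out & 1  # carry into MSB but not out of it
        carry = carry_out
    return out, overflow

print(negate(0x80))  # (128, 1): negating -128 gives itself, flagged as overflow
print(negate(0x01))  # (255, 0): negating 1 gives 0xFF (-1), no overflow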
First off, the Wikipedia article states that it cannot be negated from a negative signed number to a positive signed number. What they mean is that it takes 9 bits to represent positive 128 as a signed value, which you cannot do with an 8-bit register. If you are going from negative signed to positive unsigned as a conversion, then you have enough bits. And the hardware should give you 0x80 when you negate 0x80, because that is the right answer.
For add, subtract, multiply, etc., addition in two's complement is no different from the decimal math you learned in elementary school. You line up your binary numbers and add the columns; the result for each column is its least significant digit, and the rest is carried over to the next column. So, adding 0b001 to 0b001, for example:
1
001
001
===
010
Add the two ones in the rightmost column: the result is 0b10 (2 decimal), so write zero and carry the one. One plus zero plus zero is one, nothing to carry; zero plus zero is zero; the result is 0b010.
In the rightmost column, where 1 plus 1 is 0b10 and we write 0 and carry the one, that carried one is at the same time the carry out of the rightmost column and the carry in of the second column. Also, with pencil-and-paper math we normally only talk about a carry when it is non-zero, but if you think about it you are always carrying a number, like our second column's one plus zero is one, carry the zero.
You can think of a two's complement negate as invert and add one, or as keeping the bits up through the lowest set 1 and inverting everything above it, or as taking the result of zero minus the number.
You can work subtraction in binary using pencil and paper, for what it is worth; the borrowing makes your head hurt compared to decimal, but it works. For what you are asking though, think of invert and add one.
It is easier to wrap your head around this if you take it down to even fewer bits than 8; three is a manageable number, and it all scales from there.
So the first column below is the input, the second column is the inverted version and the third column is the second column plus one. The fourth column is the carry in to the msbit, the fifth column is the carry out of the msbit.
000 111 000 1 1
001 110 111 0 0
010 101 110 0 0
011 100 101 0 0
100 011 100 1 0
101 010 011 0 0
110 001 010 0 0
111 000 001 0 0
A real quick look at adding one to two bits:
00+1 = 001
01+1 = 010
10+1 = 011
11+1 = 100
For the case of adding one to a number, the only way to carry out from the second bit into the third bit is when the lower bits are all ones; a single zero in there stops the cascading carry bits. So in the three-bit inversion table above, the only two cases with a carry into the msbit are 111 and 011, because those are the only two cases where the lower bits are all set. For the 111 case the msbit has a carry in and a carry out; for the 011 case the msbit has a carry in but not a carry out.
So, as stated by someone else, there are transistors wired up in the chip: if the msbit carry in is set and the msbit carry out is not set, then set some flag somewhere; otherwise clear the flag.
Note that the three-bit examples above scale. If, after you invert and before you add one, you have 0b01111111, then you are going to get a carry in without a carry out. If you have 0b11111111, then you get both a carry in and a carry out. Note that zero is also a number where you get the same bits back when you negate it; the difference is that the negation of zero is representable, while the negation of a 1 followed by all zeros (the most negative number) is not.
The bottom line, though, is that this is not a crisis or end-of-the-world thing. There is a whole lot of math and other operations in the processor where carry bits and significant bits are falling off one side or the other and overflows and underflows are firing off, etc. Most of the time programmers never check for such conditions and those bits just fall on the floor, sometimes causing the program to crash; other times the programmer uses 16-bit numbers for 8-bit math just to make sure nothing bad happens, or 8-bit numbers for 5-bit math for the same reason.
Note that the hardware doesn't know signed from unsigned for addition and subtraction. The hardware also doesn't know how to subtract as such. Hardware adders are three-input adders (two operand bits and a carry in) with a result and a carry out. Wire 8 of these up and you have an 8-bit adder or subtractor. Add without carry is the two operands wired directly with a zero wired in as the lsbit carry in. Add with carry is the two operands wired directly with the carry bit wired to the lsbit carry in. Subtract is add with the second operand inverted and a one on the carry-in bit, at least from a high-level perspective; that logic can all get optimized and implemented in ways often too hard to understand on casual inspection.
The really fun exercise is multiply. Think about doing binary multiplication with pencil and paper, then realize it is much easier than decimal, because it is just a series of shifts and adds. Given enough gates you can represent each result bit as an equation whose inputs are the operands, meaning you can do a single-clock multiply if you wish. In the early days that was too many gates, so multi-clock shift-and-add implementations were used; today we burn the gates and get single-clock multiplies. Also note that if you do, say, a 16-bit = 8-bit times 8-bit multiply, the lower 8 bits of the result are the same whether it is a signed or an unsigned multiply. Since most folks do things like int = int * int, you really don't need separate signed and unsigned multiplies if all you care about is the result bits (no checking of flags, etc.). Fun stuff.
In the ARM Architecture Manual (DDI100E):
OverflowFrom
Returns 1 if the addition or subtraction specified as its parameter
caused a 32-bit signed overflow. [...]
Subtraction causes an overflow if the operands have different signs,
and the first operand and the result have different signs.
NEG
[...]
V Flag = OverflowFrom(0 - Rm)
NEG is the instruction for computing the negation of a number, i.e. the two's complement.
The V flag signals signed overflow and can be used for conditional branching. It's fairly standard across different processor architectures, together with the three other flags Z (zero), C (carry) and N (negative).
For 0 - (-128) = 0 + 128 = -128 (after wraparound), the first operand is 0 and the second operand, as well as the result, is -128, so the condition for overflow is satisfied, and the V flag is set.
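The quoted OverflowFrom rule for subtraction translates almost word for word into code (a sketch at 8 bits to match the -128 example):

def overflow_from_sub(a, b, bits=8):
    # overflow if the operands have different signs and the first
    # operand and the result have different signs
    mask = (1 << bits) - 1
    result = (a - b) & mask
    sign = lambda v: (v >> (bits - 1)) & 1
    return sign(a) != sign(b) and sign(a) != sign(result)

print(overflow_from_sub(0, 0x80))  # True: NEG of -128 sets the V flag
print(overflow_from_sub(0, 0x01))  # False: NEG of 1 is fine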

2's complement example, why not carry?

I'm watching some great lectures from David Malan (here) that go over binary. He talked about signed/unsigned, 1's complement, and 2's complement representations. There was an addition done of 4 + (-3), which lined up like this:
0100
1101 (flip 0011 to 1100, then add 1 to get 1101)
----
0001
But he waved his magical hands and threw away the last carry. I did some Wikipedia research but didn't quite get it. Can someone explain why that particular carry (in the 8's -> 16's columns) was dropped, but he kept the one just prior to it?
Thanks!
The last carry was dropped because it does not fit in the target space. It would be the fifth bit.
If he had carried out the same addition, but with for example 8 bit storage, it would have looked like this:
00000100
11111101
--------
00000001
In this situation we would also be stuck with an "unused" carry.
We have to treat carries this way to make addition with two's complement work properly, but that's all good, because this is the easiest way of treating carries when you have limited storage. Anyway, we get the correct result, right :)
x86-processors store such an additional carry in the carry flag (CF), which is possible to test with certain instructions.
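In code, dropping the carry is just masking the sum to the word width. A 4-bit toy version matching the lecture's example:

def add4(a, b):
    r = (a + b) & 0b1111               # keep 4 bits: the carry out falls away
    return r - 16 if r & 0b1000 else r # reinterpret the sign bit

print(add4(4, -3))   # 1
print(add4(-4, -3))  # -7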
A carry is not the same as an overflow
In the example you do have a carry out of the MSB. By definition, this carry ends up on the floor. (If there was someplace for it to go, then it would not have been out of the MSB.)
But adding two numbers with different signs cannot overflow. An overflow can only happen when two numbers with the same sign produce a result with a different sign.
If you extend the left-hand side by adding more digit positions, you'll see that the carry rolls over into an infinite number of bit positions towards the left, so you never really get a final carry of 1. So the answer is positive.
 ...000100
+...111101
----------
 ...000001
At some point you have to set the number of bits to represent the numbers. He chose 4 bits. Any carry into the 5th bit is lost. But that's OK because he decided to represent the number in just 4 bits.
If he decided to use 5 bits to represent the numbers he would have gotten the same result.
That's the beauty of it... Your result will be the same size as the terms you are adding. So the fifth bit is thrown out
In 2's complement, the carry bits are used to detect whether there was an overflow in the last operation. You must look at the LAST two carry bits to see if there was overflow. In your example, the last two carry bits were 11, meaning that there was no overflow.
If the last two carry bits are 11 or 00 then no overflow occurred. If the last two carry bits are 10 or 01 then there was overflow. That is why he sometimes cared about the carry bit and other times he ignored it.
The first row below is the carry row. The left-most bits in this row are used to determine if there was overflow.
1100
 0100
 1101
 ----
 0001
Looks like you're only using 4 bits, so there is no 16's column.
If you were using more than 4 bits then the -3 representation would be different, and the carry of the math would still be thrown out the end. For example, with 6 bits you'd have:
000100
111101
------
1000001
and since the carry is outside the bit range of your representation it's gone, and you only have 000001
Consider 25 + 15:
5+5 = 10; we keep the 0 and let the 1 go to the tens column. Then it's 2 + 1 (+ 1) = 4. Hence the result is 40 :)
It's the same thing with binaries: 0 + 1 = 1, 0 + 0 = 0, 1 + 1 = 10 => send the 1 to the 8's column, 0 + 1 (+ 1) = 10 => send the 1 to the next column. Here's the overflow, and why we just throw the 1 away.
This is why 2's complement is so great. It allows you to add/subtract just like you do with base 10, because you (ab)use the fact that the sign bit is the MSB, which will cascade operations all the way to overflows when necessary.
Hope I made myself understood. Quite hard to explain this when English is not your native tongue :)
When performing 2's complement addition, the only time that a carry indicates a problem is when there's an overflow condition - that can't happen if the 2 operands have a different sign.
If they have the same sign, then the overflow condition is when the sign bit changes from the 2 operands, ie., there's a carry into the most significant bit.
If I remember my computer architecture learnin', this is often detected at the hardware level by a flag that's set when the carry into the most significant bit is different from the carry out of the most significant bit. That is not the case in your example (there's a carry into the msb as well as out of the msb).
One simple way to think of it is as "the sign not changing". If the carry into the msb is different from the carry out, then the sign has improperly changed.
The carry was dropped because there wasn't anything that could be done with it. If it's important to the result, it means that the operation overflowed the range of values that could be stored in the result. In assembler, there's usually an instruction that can test for the carry beyond the end of the result, and you can explicitly deal with it there - for example, carrying it into the next higher part of a multiple precision value.
Because you are talking about 4-bit representations. That's unusual compared to an actual machine, but if we take for granted for a moment that a computer word has 4 bits, then we have the following property: a 4-bit two's complement value wraps from 7 to -8; anything outside that range cannot be stored. Besides, what would you do with an extra 5th bit beyond the sign bit anyway?
Now, given that, we can see from everyday math that 4 + (-3) = 1, which is exactly what you got.