How to convert 2+(2/7) to IEEE 754 floating point - binary

Can someone explain to me the steps to convert a number in decimal format (such as 2+(2/7)) into IEEE 754 Floating Point representation? Thanks!

First, 2 + 2/7 isn't in what most people would call "decimal format". "Decimal format" would more commonly be used to indicate a number like:
2.285714285714285714285714285714285714285714...
Even the ... is a little bit fast and loose. More commonly, the number would be truncated or rounded to some number of decimal digits:
2.2857142857142857
Of course, at this point, it is no longer exactly equal to 2 + 2/7, but is "close enough" for most uses.
We do something similar to convert a number to an IEEE-754 format; instead of base 10, we begin by writing the number in base 2:
10.010010010010010010010010010010010010010010010010010010010010...
Next we "normalize" the number, by writing it in the form 2^e * 1.xxx... for some exponent e (specifically, the digit position of the leading bit of our number):
2^1 * 1.0010010010010010010010010010010010010010010010010010010010010...
At this point, we have to choose a specific IEEE-754 format, because we need to know how many digits to keep around. Let's choose "single-precision", which has a 24-bit significand. We round the repeating binary number to 24 bits:
2^1 * 1.00100100100100100100100 10010010010010010010010010010010010010...
      \--- 24 leading bits ---/ \------- bits to be rounded away -------/
Because the trailing bits to be rounded off are larger than 1000..., the number rounds up to:
2^1 * 1.00100100100100100100101
Now, how does this value actually get encoded in IEEE-754 format? The single-precision format has a leading signbit (zero, because the number is positive), followed by eight bits that contain the value 127 + e in binary, followed by the fractional part of the significand:
0 10000000 00100100100100100100101
s exponent fraction of significand
In hexadecimal, this gives 0x40124925.
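If you just want to check that encoding mechanically, here is a minimal C sketch. It assumes float is IEEE-754 binary32 (true on essentially all current hardware) and computes 2 + 2/7 as 16/7 so the value is rounded only once:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = 16.0f / 7.0f;             /* 16/7 == 2 + 2/7, rounded once to binary32 */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);     /* view the IEEE-754 bit pattern */
    printf("%.9g -> 0x%08X\n", f, (unsigned)bits);   /* 2.28571439 -> 0x40124925 */
    return 0;
}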

Related

Representing numbers with IEEE754 with Round to Nearest Even

I'm currently learning about IEEE754 standard and rounding, and I have an exercise which is the following:
Add -325.875 to 0.546875 in IEEE754, but with 3 bits dedicated to the mantissa instead of 23.
I'm having a lot of trouble doing this, especially representing the intermediary values, and the guard/round/sticky bits. Can someone give me a step-by-step solution, to the problem?
My biggest problem is that obviously I can't represent 0.546875 as 0.100011 as that would have more precision than the system has. So how would that be represented?
Apologies if the wording is confusing.
Preliminaries
The preferred term for the fraction portion of a floating-point number is “significand,” not “mantissa.” “Mantissa” is an old word for the fraction portion of a logarithm. Mantissas are logarithmic; adding to the mantissa multiplies the number represented. Significands are linear; adding to the significand adds to the number represented (as scaled by the exponent).
When working with a significand, use its mathematical precision, not the number of bits in the storage format. The IEEE-754 binary32 format has 23 bits in its primary field for the encoding of a significand, but another bit is encoded via the exponent field. Mathematically, numbers in the binary32 format behave as if they have 24 bits in their significands.
So, the task is to work with numbers with four bits in their significands, not three.
Work
In binary, −325.875 is −101000101.111₂. In scientific notation, that is −1.01000101111₂ • 2^8. Rounding it to four bits in the significand gives −1.010₂ • 2^8.
In binary, 0.546875 is .100011₂. In scientific notation, that is 1.00011₂ • 2^−1. Rounding it to four bits in the significand gives 1.001₂ • 2^−1. Note that the first four bits are 1000, but they are immediately followed by 11, so we round up; 1.00011 is closer to 1.001 than it is to 1.000.
So, in a floating-point format with four-bit significands, we want to add −1.010₂ • 2^8 and 1.001₂ • 2^−1. If we adjust the latter number to have the same exponent as the former, we have −1.010₂ • 2^8 and 0.000000001001₂ • 2^8. To add those, we note the signs are different, so we want to subtract the magnitudes. It may help to line up the digits as we were taught in elementary school:
1.010000000000
0.000000001001
——————————————
1.001111110111
Thus, the mathematical result would be −1.001111110111₂ • 2^8. However, we need to round the significand to four bits. The first four bits are 1001, but they are followed by 11, so we round up, producing 1010. So the final result is −1.010₂ • 2^8.
−1.010₂ • 2^8 is −1.25 • 2^8 = −1.25 • 256 = −320.
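If you want to try other values, here is a rough C sketch of the same procedure. The helper round_sig is hypothetical (not a standard library function); it rounds a value to the requested number of significand bits using frexp/ldexp and the default round-to-nearest-even mode, and it ignores subnormals, overflow, and exponent-range limits:

#include <stdio.h>
#include <math.h>

/* Round x to a significand of `bits` binary digits (round-to-nearest-even,
   via the default rounding mode used by nearbyint). */
static double round_sig(double x, int bits) {
    if (x == 0.0) return 0.0;
    int e;
    double m = frexp(x, &e);                       /* x = m * 2^e, 0.5 <= |m| < 1 */
    return ldexp(nearbyint(ldexp(m, bits)), e - bits);
}

int main(void) {
    double a = round_sig(-325.875, 4);    /* -1.010_2 * 2^8  = -320    */
    double b = round_sig(0.546875, 4);    /*  1.001_2 * 2^-1 =  0.5625 */
    double sum = round_sig(a + b, 4);     /* round the exact sum back to 4 bits */
    printf("a = %g, b = %g, sum = %g\n", a, b, sum);  /* sum = -320 */
    return 0;
}

It prints a = -320, b = 0.5625, sum = -320, matching the hand calculation above.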

JSON.parse big numbers result in incorrect numbers [duplicate]

See this code:
var jsonString = '{"id":714341252076979033,"type":"FUZZY"}';
var jsonParsed = JSON.parse(jsonString);
console.log(jsonString, jsonParsed);
When I see my console in Firefox 3.5, the value of jsonParsed is the number rounded:
Object id=714341252076979100 type=FUZZY
Tried different values, the same outcome (number rounded).
I also don't get its rounding rules. 714341252076979136 is rounded to 714341252076979200, whereas 714341252076979135 is rounded to 714341252076979100.
Why is this happening?
You're overflowing the capacity of JavaScript's number type, see §8.5 of the spec for details. Those IDs will need to be strings.
IEEE-754 double-precision floating point (the kind of number JavaScript uses) can't precisely represent all numbers (of course). Famously, 0.1 + 0.2 == 0.3 is false. That can affect whole numbers just like it affects fractional numbers; it starts once you get above 9,007,199,254,740,991 (Number.MAX_SAFE_INTEGER).
Beyond Number.MAX_SAFE_INTEGER + 1 (9007199254740992), the IEEE-754 floating-point format can no longer represent every consecutive integer. 9007199254740991 + 1 is 9007199254740992, but 9007199254740992 + 1 is also 9007199254740992 because 9007199254740993 cannot be represented in the format. The next that can be is 9007199254740994. Then 9007199254740995 can't be, but 9007199254740996 can.
The reason is we've run out of bits, so we no longer have a 1s bit; the lowest-order bit now represents multiples of 2. Eventually, if we keep going, we lose that bit and only work in multiples of 4. And so on.
Your values are well above that threshold, and so they get rounded to the nearest representable value.
As of ES2020, you can use BigInt for integers that are arbitrarily large, but there is no JSON representation for them. You could use strings and a reviver function:
const jsonString = '{"id":"714341252076979033","type":"FUZZY"}';
// Note it's a string −−−−^−−−−−−−−−−−−−−−−−−^
const obj = JSON.parse(jsonString, (key, value) => {
if (key === "id" && typeof value === "string" && value.match(/^\d+$/)) {
return BigInt(value);
}
return value;
});
console.log(obj);
(Look in the real console; the snippets console doesn't understand BigInt.)
If you're curious about the bits, here's what happens: An IEEE-754 binary double-precision floating-point number has a sign bit, 11 bits of exponent (which defines the overall scale of the number, as a power of 2 [because this is a binary format]), and 52 bits of significand (but the format is so clever it gets 53 bits of precision out of those 52 bits). How the exponent is used is complicated (described here), but in very vague terms, if we add one to the exponent, the value of the significand is doubled, since the exponent is used for powers of 2 (again, caveat there, it's not direct, there's cleverness in there).
So let's look at the value 9007199254740991 (aka, Number.MAX_SAFE_INTEGER):
+−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− sign bit
/ +−−−−−−−+−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− exponent
/ / | +−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−+− significand
/ / | / |
0 10000110011 1111111111111111111111111111111111111111111111111111
= 9007199254740991 (Number.MAX_SAFE_INTEGER)
That exponent value, 10000110011, means that every time we add one to the significand, the number represented goes up by 1 (the whole number 1, we lost the ability to represent fractional numbers much earlier).
But now that significand is full. To go past that number, we have to increase the exponent, which means that if we add one to the significand, the value of the number represented goes up by 2, not 1 (because the exponent is applied to 2, the base of this binary floating point number):
+−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− sign bit
/ +−−−−−−−+−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− exponent
/ / | +−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−+− significand
/ / | / |
0 10000110100 0000000000000000000000000000000000000000000000000000
= 9007199254740992 (Number.MAX_SAFE_INTEGER + 1)
Well, that's okay, because 9007199254740991 + 1 is 9007199254740992 anyway. But! We can't represent 9007199254740993. We've run out of bits. If we add just 1 to the significand, it adds 2 to the value:
+−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− sign bit
/ +−−−−−−−+−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− exponent
/ / | +−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−+− significand
/ / | / |
0 10000110100 0000000000000000000000000000000000000000000000000001
= 9007199254740994 (Number.MAX_SAFE_INTEGER + 3)
The format just cannot represent odd numbers anymore as we increase the value; the exponent is too big.
Eventually, we run out of significand bits again and have to increase the exponent, so we end up only being able to represent multiples of 4. Then multiples of 8. Then multiples of 16. And so on.
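The same gaps between representable integers are visible from any language that uses IEEE-754 binary64; here is a minimal C sketch of the examples above (JavaScript's Number behaves identically):

#include <stdio.h>

int main(void) {
    double x = 9007199254740991.0;    /* 2^53 - 1, i.e. Number.MAX_SAFE_INTEGER */
    printf("%.0f\n", x + 1.0);        /* 9007199254740992 */
    printf("%.0f\n", x + 2.0);        /* also 9007199254740992: ...993 is not representable */
    printf("%.0f\n", x + 3.0);        /* 9007199254740994 */
    return 0;
}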
What you're seeing here is actually the effect of two roundings. Numbers in ECMAScript are internally represented as double-precision floating-point. When id is set to 714341252076979033 (0x9e9d9958274c359 in hex), it actually is assigned the nearest representable double-precision value, which is 714341252076979072 (0x9e9d9958274c380). When you print out the value, it is being rounded to 16 significant decimal digits, which gives 714341252076979100.
It is not caused by this json parser. Just try to enter 714341252076979033 to fbug's console. You'll see the same 714341252076979100.
See this blog post for details:
http://www.exploringbinary.com/print-precision-of-floating-point-integers-varies-too
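To reproduce both roundings outside a JavaScript engine, here is a small C sketch. The digit strings in the comments assume a C library that prints binary64 values exactly, as glibc and recent MSVC do:

#include <stdio.h>

int main(void) {
    /* The decimal literal is rounded once, by the compiler, to the nearest
       binary64 value; formatting it with fewer digits rounds it again. */
    double id = 714341252076979033.0;
    printf("%.0f\n", id);      /* 714341252076979072: the nearest representable double */
    printf("%.16g\n", id);     /* 7.143412520769791e+17: the 16 digits behind 714341252076979100 */
    return 0;
}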
JavaScript uses double-precision floating-point values, i.e. a total precision of 53 bits, but you need
ceil(log2(714341252076979033)) = 60
bits to exactly represent the value.
The nearest exactly representable number is 714341252076979072 (write the original number in binary, replace the last 7 digits with 0 and round up because the highest replaced digit was 1).
You'll get 714341252076979100 instead of this number because ToString() as described by ECMA-262, §9.8.1 works with powers of ten and in 53 bit precision all these numbers are equal.
The problem is that your number requires a greater precision than JavaScript has.
Can you send the number as a string? Separated in two parts?
JavaScript can only handle exact whole numbers up to about 9000 million million (that's 9 with 15 zeros). Higher than that and you get garbage. Work around this by using strings to hold the numbers. If you need to do math with these numbers, write your own functions or see if you can find a library for them: I suggest the former as I don't like the libraries I've seen. To get you started, see two of my functions at another answer.

Sign magnitude, One's complement, Two's Complement

So my professor has asked us to make a list of all positive and negative numbers that can be represented in one's complement, two's complement, and sign magnitude:
Using 4-bit numbers, for example (5)10 = (0101)2
Write all positive numbers and all negative numbers that can be represented with four bits in sign-magnitude, one’s complement, and two’s complement.
Now, I am not looking for the answer just clarification.
For sign magnitude, the first bit represents the sign of the number. So in the example provided, negative five is -5 = (1101). The ones complement = (0101), the twos complement = (1010).
Sign magnitude only allows for three bits to show the number and one for the sign (the leading bit, reading from right to left). This would mean that we only have 8 combinations, so that is numbers from 0-7 and -0-(-6). For ones and twos we have 16? So 0-15 and -0-(-15).
can someone explain this question better?
Here's a brief description of all three representation techniques you've mentioned.
Sign and Magnitude Representation
In this representation, we can represent numbers using any number of bits (typically a power of 2). There are two parts in the representation: sign and magnitude, as the name implies.
If we want to represent a number in n number of bits,
the first bit always represents the sign of the number. i.e. 0 for a positive number and 1 for a negative number.
the remaining bits (n-1) represent the magnitude of the number in Binary.
e.g. If you want to represent +25 and -25 using 8 bits:
(+25)10 = 00011001 and (-25)10 = 10011001
Complement
Since Binary number system has only 2 digits (0 and 1), the complement of one digit is the other. i.e. the complement of 0 is 1 and vice versa.
One's Complement
In this representation, there is no specific bit to represent the sign, but the MSB (Most Significant Bit) can be used to determine the sign of the number, i.e. the MSB is 0 if the number is positive and 1 if the number is negative. Binary numbers of a specific bit size are used (e.g. 8, 16, 32, etc. bits).
If the number is positive
Convert the number to binary
Set the number to specific bit size
If the number is negative
Convert the number to binary
Set the number to specific bit size
Get the complement of that value
e.g. Take the previous example again
(+25)10
Convert the number to binary -> (11001)2
Set the number to specific bit size -> (0001 1001)
(-25)10
Convert the number to binary -> (11001)2
Set the number to specific bit size -> (0001 1001)
Get the complement of that value -> (1110 0110)
Two's Complement
This representation technique is very much similar to One's Complement Representation. The main difference is that when the number is negative, 1 is added to the LSB (Least Significant Bit) after getting the complement.
e.g. Let us take the same example
(+25)10
Convert the number to binary -> (11001)2
Set the number to specific bit size -> (0001 1001)
(-25)10
Convert the number to binary -> (11001)2
Set the number to specific bit size -> (0001 1001)
Get the complement of that value -> (1110 0110)
Add 1 to LSB -> (1110 0110) + 1 = (1110 0111)
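As a quick cross-check of the bit patterns above, here is a small C sketch. It assumes 8-bit bytes; the final line additionally assumes the two's-complement interpretation that virtually all modern hardware uses:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t plus25   = 25;                       /* 0001 1001 */
    uint8_t sign_mag = 0x80u | 25;               /* 1001 1001: -25 in sign-magnitude   */
    uint8_t ones     = (uint8_t)~plus25;         /* 1110 0110: -25 in one's complement */
    uint8_t twos     = (uint8_t)(~plus25 + 1);   /* 1110 0111: -25 in two's complement */
    printf("sign-magnitude  : 0x%02X\n", (unsigned)sign_mag);
    printf("one's complement: 0x%02X\n", (unsigned)ones);
    printf("two's complement: 0x%02X\n", (unsigned)twos);
    printf("as int8_t       : %d\n", (int8_t)twos);   /* -25 */
    return 0;
}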

Is the most significant decimal digits precision that can be converted to binary and back to decimal without loss of significance 6 or 7.225?

I've come across two different precision formulas for floating-point numbers.
⌊(N-1) log10(2)⌋ = 6 decimal digits (Single-precision)
and
N log10(2) ≈ 7.225 decimal digits (Single-precision)
Where N = 24 Significant bits (Single-precision)
The first formula is found at the top of page 4 of "IEEE Standard 754 for Binary Floating-Point Arithmetic" written by, Professor W. Kahan.
The second formula is found on the Wikipedia article "Single-precision floating-point format" under section IEEE 754 single-precision binary floating-point format: binary32.
For the first formula, Professor W. Kahan says
If a decimal string with at most 6 sig. dec. is converted to Single and then converted back to the same number of sig. dec.,
then the final string should match the original.
For the second formula, Wikipedia says
...the total precision is 24 bits (equivalent to log10(2^24) ≈ 7.225 decimal digits).
The results of both formulas (6 and 7.225 decimal digits) are different, and I expected them to be the same because I assumed they both were meant to represent the most significant decimal digits which can be converted to floating-point binary and then converted back to decimal with the same number of significant decimal digits that it started with.
Why do these two numbers differ, and what is the most significant decimal digits precision that can be converted to binary and back to decimal without loss of significance?
These are talking about two slightly different things.
The 7.225¹ digits is the precision with which a number can be stored internally. For one example, if you did a computation with a double precision number (so you were starting with something like 15 digits of precision), then rounded it to a single precision number, the precision you'd have left at that point would be approximately 7 digits.
The 6 digits is talking about the precision that can be maintained through a round-trip conversion from a string of decimal digits, into a floating point number, then back to another string of decimal digits.
So, let's assume I start with a number like 1.23456789 as a string, then convert that to a float32, then convert the result back to a string. When I've done this, I can expect 6 digits to match exactly. The seventh digit might be rounded though, so I can't necessarily expect it to match (though it probably will be within +/- 1 of the original string).
For example, consider the following code:
#include <iostream>
#include <iomanip>
int main() {
    double init = 987.23456789;
    for (int i = 0; i < 100; i++) {
        float f = init + i / 100.0;
        std::cout << std::setprecision(10) << std::setw(20) << f;
        if (i % 4 == 3)
            std::cout << '\n';   // four values per line, matching the table below
    }
}
This produces a table like the following:
987.2345581 987.2445679 987.2545776 987.2645874
987.2745972 987.2845459 987.2945557 987.3045654
987.3145752 987.324585 987.3345947 987.3445435
987.3545532 987.364563 987.3745728 987.3845825
987.3945923 987.404541 987.4145508 987.4245605
987.4345703 987.4445801 987.4545898 987.4645386
987.4745483 987.4845581 987.4945679 987.5045776
987.5145874 987.5245972 987.5345459 987.5445557
987.5545654 987.5645752 987.574585 987.5845947
987.5945435 987.6045532 987.614563 987.6245728
987.6345825 987.6445923 987.654541 987.6645508
987.6745605 987.6845703 987.6945801 987.7045898
987.7145386 987.7245483 987.7345581 987.7445679
987.7545776 987.7645874 987.7745972 987.7845459
987.7945557 987.8045654 987.8145752 987.824585
987.8345947 987.8445435 987.8545532 987.864563
987.8745728 987.8845825 987.8945923 987.904541
987.9145508 987.9245605 987.9345703 987.9445801
987.9545898 987.9645386 987.9745483 987.9845581
987.9945679 988.0045776 988.0145874 988.0245972
988.0345459 988.0445557 988.0545654 988.0645752
988.074585 988.0845947 988.0945435 988.1045532
988.114563 988.1245728 988.1345825 988.1445923
988.154541 988.1645508 988.1745605 988.1845703
988.1945801 988.2045898 988.2145386 988.2245483
If we look through this, we can see that the first six significant digits always follow the pattern precisely (i.e., each result is exactly 0.01 greater than its predecessor). As we can see in the original double, the value is actually 98x.xx456--but when we convert the single-precision float to decimal, we can see that the 7th digit frequently would not be read back in correctly. Since the subsequent digit is greater than 5, it should round up to 98x.xx46, but some of the values won't (e.g., the second-to-last item in the first column is 988.154541, which would be rounded down instead of up, so we'd end up with 98x.xx45 instead of 46). So, even though the value (as stored) is precise to 7 digits (plus a little), by the time we round-trip the value through a conversion to decimal and back, we can't depend on that seventh digit matching precisely anymore (even though there's enough precision that it will a lot more often than not).
1. That basically means 7 digits, and the 8th digit will be a little more accurate than nothing, but not a whole lot--for example, if we were converting from a double of 1.2345678, the .225 digits of precision mean that the last digit would be within about +/- .775 of what started out there (whereas without the .225 digits of precision, it would be basically +/- 1 of what started out there).
what is the most significant decimal digits precision that can be
converted to binary and back to decimal without loss of significance?
The most significant decimal digits precision that can be converted to binary and back to decimal without loss of significance (for single-precision floating-point numbers or 24-bits) is 6 decimal digits.
Why do these two numbers differ...
The numbers 6 and 7.225 differ, because they define two different things. 6 is the most decimal digits that can be round-tripped. 7.225 is the approximate number of decimal digits precision for a 24-bit binary integer because a 24-bit binary integer can have 7 or 8 decimal digits depending on its specific value.
7.225 was found using the specific binary integer formula.
dspec = b·log10(2)   (dspec = specific decimal digits, b = bits)
However, what you normally need to know are the minimum and maximum decimal digits for a b-bit integer. The following formulas are used to find the min and max decimal digits (7 and 8 respectively for 24 bits) of a specific binary integer.
dmin = ⌈(b-1)·log10(2)⌉   (dmin = min decimal digits, b = bits, ⌈x⌉ = smallest integer ≥ x)
dmax = ⌈b·log10(2)⌉   (dmax = max decimal digits, b = bits, ⌈x⌉ = smallest integer ≥ x)
To learn more about how these formulas are derived, read Number of Decimal Digits In a Binary Integer, written by Rick Regan.
This is all well and good, but you may ask, why is 6 the most decimal digits for a round-trip conversion if you say that the span of decimal digits for a 24-bit number is 7 to 8?
The answer is — because the above formulas only work for integers and not floating-point numbers!
Every decimal integer has an exact value in binary. However, the same cannot be said for every decimal floating-point number. Take .1 for example. .1 in binary is the number 0.000110011001100..., which is a repeating or recurring binary. This can produce rounding error.
Moreover, it takes one more bit to represent a decimal floating-point number than it does to represent a decimal integer of equal significance. This is because floating-point numbers are more precise the closer they are to 0, and less precise the further they are from 0. Because of this, many floating-point numbers near the minimum and maximum value ranges (emin = -126 and emax = +127 for single-precision) lose 1 bit of precision due to rounding error. To see this visually, look at What every computer programmer should know about floating point, part 1, written by Josh Haberman.
Furthermore, there are at least 784,757 positive seven-digit decimal numbers that cannot retain their original value after a round-trip conversion. An example of such a number that cannot survive the round-trip is 8.589973e9. This is the smallest positive number that does not retain its original value.
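Here is a quick C sketch of that failing round trip (assuming float is IEEE-754 binary32):

#include <stdio.h>

int main(void) {
    /* 8.589973e9 has 7 significant decimal digits, but the nearest binary32
       value is 8589973504, which reads back at 7 digits as 8.589974e9. */
    float f = 8.589973e9f;
    printf("%.7g\n", f);    /* prints 8.589974e+09, not 8.589973e+09 */
    return 0;
}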
Here's the formula that you should be using for floating-point number precision that will give you 6 decimal digits for round-trip conversion.
dmax = ⌊(b-1)·log10(2)⌋   (dmax = max decimal digits, b = bits, ⌊x⌋ = largest integer ≤ x)
To learn more about how this formula is derived, read Number of Digits Required For Round-Trip Conversions, also written by Rick Regan. Rick does an excellent job showing the formulas derivation with references to rigorous proofs.
As a result, you can utilize the above formulas in a constructive way; if you understand how they work, you can apply them to any programming language that uses floating-point data types. All you have to know is the number of significant bits that your floating-point data type has, and you can find their respective number of decimal digits that you can count on to have no loss of significance after a round-trip conversion.
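Incidentally, this is the same formula the C standard uses to define FLT_DIG and DBL_DIG in <float.h>; a small sketch (the printed values assume IEEE-754 float and double):

#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void) {
    /* Round-trip-safe decimal digits: floor((b - 1) * log10(2)) */
    printf("binary32: %d (FLT_DIG = %d)\n", (int)floor((24 - 1) * log10(2.0)), FLT_DIG);   /* 6 */
    printf("binary64: %d (DBL_DIG = %d)\n", (int)floor((53 - 1) * log10(2.0)), DBL_DIG);   /* 15 */
    return 0;
}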
June 18, 2017 Update: I want to include a link to Rick Regan's new article which goes into more detail and in my opinion better answers this question than any answer provided here. His article is "Decimal Precision of Binary Floating-Point Numbers" and can be found on his website www.exploringbinary.com.
Do keep in mind that they are the exact same formulas. Remember your high-school math book identity:
Log(x^y) == y * Log(x)
It helps to actually calculate the values for N = 24 with your calculator:
Kahan's: 23 * Log(2) = 6.924
Wikipedia's: Log(2^24) = 7.225
Kahan was forced to truncate 6.924 down to 6 digits because of floor(), bummer. The only actual difference is that Kahan used 1 less bit of precision.
Pretty hard to guess why, the professor might have relied on old notes. Written before IEEE-754 and not taking into account that the 24th bit of precision is for free. The format uses a trick, the most significant bit of a floating point value that isn't 0 is always 1. So it doesn't need to be stored. The processor adds it back before it performs a calculation. Turning 23 bits of stored precision into 24 of effective precision.
Or he took into account that the conversion from a decimal string to a binary floating point value itself generates an error. Many nice round decimal values, like 0.1, cannot be perfectly converted to binary. It has an endless number of digits, just like 1/3 in decimal. That however generates a result that is off by +/- 0.5 bits, achieved by simple rounding. So the result is accurate to 23.5 * Log(2) = 7.074 decimal digits. If he assumed that the conversion routine is clumsy and doesn't properly round then the result can be off by +/-1 bit and N-1 is appropriate. They are not clumsy.
Or he thought like a typical scientist or (heaven forbid) accountant and wants the result of a calculation converted back to decimal as well. Such as you'd get when you trivially look for a 7 digit decimal number whose conversion back-and-forth does not produce the same number. Yes, that adds another +/- 0.5 bit error, summing up to 1 bit error total.
But never, never make that mistake, you always have to include any errors you get from manipulating the number in a calculation. Some of them lose significant digits very quickly, subtraction in particular is very dangerous.

Why prefer two's complement over sign-and-magnitude for signed numbers?

I'm just curious if there's a reason why in order to represent -1 in binary, two's complement is used: flipping the bits and adding 1?
-1 is represented by 11111111 (two's complement) rather than (to me more intuitive) 10000001 which is binary 1 with first bit as negative flag.
Disclaimer: I don't rely on binary arithmetic for my job!
It's done so that addition doesn't need to have any special logic for dealing with negative numbers. Check out the article on Wikipedia.
Say you have two numbers, 2 and -1. In your "intuitive" way of representing numbers, they would be 0010 and 1001, respectively (I'm sticking to 4 bits for size). In the two's complement way, they are 0010 and 1111. Now, let's say I want to add them.
Two's complement addition is very simple. You add numbers normally and any carry bit at the end is discarded. So they're added as follows:
0010
+ 1111
=10001
= 0001 (discard the carry)
0001 is 1, which is the expected result of "2+(-1)".
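The "add normally and discard the carry" rule is easy to simulate in C by masking to 4 bits; a minimal sketch:

#include <stdio.h>

int main(void) {
    unsigned a = 0x2u;               /* 0010 =  2                           */
    unsigned b = 0xFu;               /* 1111 = -1 in 4-bit two's complement */
    unsigned sum = (a + b) & 0xFu;   /* ordinary addition, carry out of bit 3 discarded */
    printf("0x%X\n", sum);           /* 0x1 -> 0001, i.e. 2 + (-1) = 1 */
    return 0;
}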
But in your "intuitive" method, adding is more complicated:
0010
+ 1001
= 1011
Which is -3, right? Simple addition doesn't work in this case. You need to note that one of the numbers is negative and use a different algorithm if that's the case.
For this "intuitive" storage method, subtraction is a different operation than addition, requiring additional checks on the numbers before they can be added. Since you want the most basic operations (addition, subtraction, etc) to be as fast as possible, you need to store numbers in a way that lets you use the simplest algorithms possible.
Additionally, in the "intuitive" storage method, there are two zeroes:
0000 "zero"
1000 "negative zero"
Which are intuitively the same number but have two different values when stored. Every application will need to take extra steps to make sure that non-zero values are also not negative zero.
There's another bonus with storing ints this way, and that's when you need to extend the width of the register the value is being stored in. With two's complement, storing a 4-bit number in an 8-bit register is a matter of repeating its most significant bit:
0001 (one, in four bits)
00000001 (one, in eight bits)
1110 (negative two, in four bits)
11111110 (negative two, in eight bits)
It's just a matter of looking at the sign bit of the smaller word and repeating it until it pads the width of the bigger word.
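Sign extension is exactly what an ordinary widening conversion of a signed integer does; a minimal C sketch (int8_t and int32_t are required to be two's complement):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int8_t narrow = -2;              /* 11111110 in 8-bit two's complement */
    int32_t wide  = narrow;          /* widening just repeats the sign bit */
    printf("%d -> 0x%08X\n", wide, (unsigned)wide);   /* -2 -> 0xFFFFFFFE */
    return 0;
}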
With your method you would need to clear the existing bit, which is an extra operation in addition to padding:
0001 (one, in four bits)
00000001 (one, in eight bits)
1010 (negative two, in four bits)
10000010 (negative two, in eight bits)
You still need to set those extra 4 bits in both cases, but in the "intuitive" case you need to clear the 5th bit as well. It's one tiny extra step in one of the most fundamental and common operations present in every application.
Wikipedia says it all:
The two's-complement system has the advantage of not requiring that the addition and subtraction circuitry examine the signs of the operands to determine whether to add or subtract. This property makes the system both simpler to implement and capable of easily handling higher precision arithmetic. Also, zero has only a single representation, obviating the subtleties associated with negative zero, which exists in ones'-complement systems.
In other words, adding is the same, whether or not the number is negative.
Even though this question is old, let me put in my 2 cents.
Before I explain this, let's get back to basics: 2's complement is 1's complement + 1.
Now what is 1's complement, and what is its significance in addition?
The sum of any n-bit number and its 1's complement gives you the highest possible number that can be represented by those n bits.
Example:
0010 (2 in 4 bit system)
+1101 (1's complement of 2)
___________________________
1111 (the highest number that we can represent by 4 bits)
Now what happens if we try to add 1 more to the result? It will result in an overflow.
The result will be 1 0000, which is 0 (as we are working with 4-bit numbers; the 1 on the left is an overflow).
So ,
Any n-bit number + its 1's complement = max n-bit number
Any n-bit number + its 1's complement + 1 = 0 (as explained above, overflow occurs because we are adding 1 to the max n-bit number)
Someone then decided to call 1's complement + 1 the 2's complement. So the above statement becomes:
Any n-bit number + its 2's complement = 0
which means the 2's complement of a number = −(that number)
All this yields one more question: why can we use only (n−1) of the n bits to represent a positive number, and why does the leftmost nth bit represent the sign (0 in the leftmost bit means a +ve number, and 1 means a −ve number)? E.g. why do we use only the first 31 bits of an int in Java to represent a positive number, while if the 32nd bit is 1, it's a −ve number?
1100 (let's assume 12 in a 4-bit system)
+0100 (2's complement of 12)
___________________________
1 0000 (result is zero , with the carry 1 overflowing)
Thus the system of (n + 2's complement of n) = 0 still works. The only ambiguity here is that the 2's complement of 12 is 0100, which also represents +4, in addition to representing −12 in the 2's complement system.
This problem is solved if positive numbers always have a 0 in their leftmost bit. In that case their 2's complement will always have a 1 in its leftmost bit, and we won't have the ambiguity of the same set of bits representing both a 2's complement number and a +ve number.
Two's complement allows addition and subtraction to be done in the normal way (like you would for unsigned numbers). It also prevents -0 (a separate way to represent 0 that would not be equal to 0 with the normal bit-by-bit method of comparing numbers).
Two's complement allows negative and positive numbers to be added together without any special logic.
If you tried to add 1 and -1 using your method
10000001 (-1)
+00000001 (1)
you get
10000010 (-2)
Instead, by using two's complement, we can add
11111111 (-1)
+00000001 (1)
you get
00000000 (0)
The same is true for subtraction.
Also, if you try to subtract 4 from 6 (two positive numbers), you can take the 2's complement of 4 and add the two together: 6 + (-4) = 6 - 4 = 2
This means that subtraction and addition of both positive and negative numbers can all be done by the same circuit in the cpu.
This is to simplify sums and differences of numbers. A sum of a negative number and a positive one codified in 2's complement is the same as summing them up in the normal way.
The usual implementation of the operation is "flip the bits and add 1", but there's another way of defining it that probably makes the rationale clearer. 2's complement is the form you get if you take the usual unsigned representation where each bit controls the next power of 2, and just make the most significant term negative.
Taking an 8-bit value a7 a6 a5 a4 a3 a2 a1 a0
The usual unsigned binary interpretation is:
2^7*a7 + 2^6*a6 + 2^5*a5 + 2^4*a4 + 2^3*a3 + 2^2*a2 + 2^1*a1 + 2^0*a0
11111111 = 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255
The two's complement interpretation is:
-2^7*a7 + 2^6*a6 + 2^5*a5 + 2^4*a4 + 2^3*a3 + 2^2*a2 + 2^1*a1 + 2^0*a0
11111111 = -128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = -1
None of the other bits change meaning at all, and carrying into a7 is "overflow" and not expected to work, so pretty much all of the arithmetic operations work without modification (as others have noted). Sign-magnitude representations generally require inspecting the sign bit and using different logic.
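That "same weights, except the top one is negative" reading can be evaluated directly; a small C sketch for the 11111111 example:

#include <stdio.h>

int main(void) {
    unsigned bits = 0xFFu;            /* a7..a0 = 11111111 */
    int plain = 0, twos = 0;
    for (int i = 0; i < 8; i++) {
        if (!((bits >> i) & 1u)) continue;
        plain += 1 << i;                              /* every bit weighs +2^i   */
        twos  += (i == 7) ? -(1 << 7) : (1 << i);     /* the top bit weighs -2^7 */
    }
    printf("plain binary: %d, two's complement: %d\n", plain, twos);   /* 255, -1 */
    return 0;
}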
To expand on others' answers:
In two's complement
Adding uses the same mechanism as adding plain positive integers.
Subtraction doesn't change either.
Multiplication too!
Division does require a different mechanism.
All these are true because two's complement is just normal modular arithmetic, where we choose to look at some numbers as negative by subtracting the modulus.
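A tiny C illustration of that modular view: in 8 bits, -3 and 253 are the same residue mod 256, so multiplying either by 5 gives the same low byte (the int8_t cast assumes the usual two's-complement reading):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t a = (uint8_t)-3;                /* 0xFD, i.e. 253 */
    uint8_t product = (uint8_t)(a * 5);     /* 253 * 5 = 1265, low byte 0xF1 */
    printf("0x%02X = %d\n", (unsigned)product, (int8_t)product);   /* 0xF1 = -15 */
    return 0;
}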
Reading the answers to this question, I came across this comment [edited].
2's complement of 0100(4) will be 1100. Now 1100 is 12 if I say normally. So,
when I say normal 1100 then it is 12, but when I say 2's complement 1100 then
it is -4? Also, in Java when 1100 (lets assume 4 bits for now) is stored then
how it is determined if it is +12 or -4 ?? – hagrawal Jul 2 at 16:53
In my opinion, the question asked in this comment is quite interesting and so I'd like first of all to rephrase it and then to provide an answer and an example.
QUESTION – How can the system establish how one or more adjacent bytes have to be interpreted? In particular, how can the system establish whether a given sequence of bytes is a plain binary number or a 2's complement number?
ANSWER – The system establishes how to interpret a sequence of bytes through types.
Types define
how many bytes have to be considered
how those bytes have to be interpreted
EXAMPLE – Below we assume that
char's are 1 byte long
short's are 2 bytes long
int's and float's are 4 bytes long
Please note that these sizes are specific to my system. Although pretty common, they can differ from system to system. If you're curious what they are on your system, use the sizeof operator.
First of all we define an array containing 4 bytes and initialize all of them to the binary number 10111101, corresponding to the hexadecimal number BD.
// BD(hexadecimal) = 10111101 (binary)
unsigned char l_Just4Bytes[ 4 ] = { 0xBD, 0xBD, 0xBD, 0xBD };
Then we read the array content using different types.
unsigned char and signed char
// 10111101 as a PLAIN BINARY number equals 189
printf( "l_Just4Bytes as unsigned char -> %hhu\n", *( ( unsigned char* )l_Just4Bytes ) );
// 10111101 as a 2'S COMPLEMENT number equals -67
printf( "l_Just4Bytes as signed char -> %hhi\n", *( ( signed char* )l_Just4Bytes ) );
unsigned short and short
// 1011110110111101 as a PLAIN BINARY number equals 48573
printf( "l_Just4Bytes as unsigned short -> %hu\n", *( ( unsigned short* )l_Just4Bytes ) );
// 1011110110111101 as a 2'S COMPLEMENT number equals -16963
printf( "l_Just4Bytes as short -> %hi\n", *( ( short* )l_Just4Bytes ) );
unsigned int, int and float
// 10111101101111011011110110111101 as a PLAIN BINARY number equals 3183328701
printf( "l_Just4Bytes as unsigned int -> %u\n", *( ( unsigned int* )l_Just4Bytes ) );
// 10111101101111011011110110111101 as a 2'S COMPLEMENT number equals -1111638595
printf( "l_Just4Bytes as int -> %i\n", *( ( int* )l_Just4Bytes ) );
// 10111101101111011011110110111101 as a IEEE 754 SINGLE-PRECISION number equals -0.092647
printf( "l_Just4Bytes as float -> %f\n", *( ( float* )l_Just4Bytes ) );
The 4 bytes in RAM (l_Just4Bytes[ 0..3 ]) always remain exactly the same. The only thing that changes is how we interpret them.
Again, we tell the system how to interpret them through types.
For instance, above we have used the following types to interpret the contents of the l_Just4Bytes array
unsigned char: 1 byte in plain binary
signed char: 1 byte in 2's complement
unsigned short: 2 bytes in plain binary notation
short: 2 bytes in 2's complement
unsigned int: 4 bytes in plain binary notation
int: 4 bytes in 2's complement
float: 4 bytes in IEEE 754 single-precision notation
[EDIT] This post has been edited after the comment by user4581301. Thank you for taking the time to drop those few helpful lines!
Two's complement is used because it is simpler to implement in circuitry and also does not allow a negative zero.
If there are x bits, two's complement will range from +(2^x/2 - 1) to -(2^x/2). One's complement will run from +(2^x/2 - 1) to -(2^x/2 - 1), but will permit a negative zero (0000 and 1111 both represent zero in a 4-bit 1's complement system).
It's worthwhile to note that on some early adding machines, before the days of digital computers, subtraction would be performed by having the operator enter values using a different colored set of legends on each key (so each key would enter nine minus the number to be subtracted), and press a special button that would assume a carry into a calculation. Thus, on a six-digit machine, to subtract 1234 from a value, the operator would hit keys that would normally indicate "998,765" and hit a button to add that value plus one to the calculation in progress. Two's complement arithmetic is simply the binary equivalent of that earlier "ten's-complement" arithmetic.
The advantage of performing subtraction by the complement method is a reduction in hardware complexity. There is no need for separate digital circuits for addition and subtraction; both addition and subtraction are performed by an adder alone.
I have a slight addendum that is important in some situations: two's complement is the only representation that is possible given these constraints:
Unsigned numbers and two's complement are commutative rings with identity. There is a homomorphism between them.
They share the same representation, with a different branch cut for negative numbers (hence why addition and multiplication are the same between them).
The high bit determines the sign.
To see why, it helps to reduce the cardinality; for example, Z_4.
Sign and magnitude and ones' complement both do not form a ring with the same number of elements; a symptom is the double zero. It is therefore difficult to work with on the edges; to be mathematically consistent, they require checking for overflow or trap representations.
Well, your intent is not really to reverse all bits of your binary number. It is actually to subtract each of its digits from 1. It's just a fortunate coincidence that subtracting 1 from 1 results in 0 and subtracting 0 from 1 results in 1. So flipping bits is effectively carrying out this subtraction.
But why are you finding each digit's difference from 1? Well, you're not. Your actual intent is to compute the given binary number's difference from another binary number which has the same number of digits but contains only 1's. For example if your number is 10110001, when you flip all those bits, you're effectively computing (11111111 - 10110001).
This explains the first step in the computation of Two's Complement. Now let's include the second step -- adding 1 -- also in the picture.
Add 1 to the above binary equation:
11111111 - 10110001 + 1
What do you get? This:
100000000 - 10110001
This is the final equation. By carrying out those two steps you're trying to find this final difference: the given binary number subtracted from another binary number with one extra digit, containing zeros except at the most significant bit position.
But why are we hankerin' after this difference really? Well, from here on, I guess it would be better if you read the Wikipedia article.
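A quick check in C that the two views agree for the 10110001 example (masking to 8 bits to model an 8-bit register):

#include <stdio.h>

int main(void) {
    unsigned x = 0xB1u;                                 /* 10110001                 */
    unsigned flip_and_add_one  = (~x + 1u) & 0xFFu;     /* "invert the bits, add 1" */
    unsigned subtract_from_256 = (0x100u - x) & 0xFFu;  /* 2^8 - x                  */
    printf("0x%02X 0x%02X\n", flip_and_add_one, subtract_from_256);   /* both 0x4F */
    return 0;
}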
We perform only the addition operation for both addition and subtraction. We add the second operand to the first operand for addition. For subtraction, we add the 2's complement of the second operand to the first operand.
With a 2's complement representation we do not need separate digital components for subtraction—only adders and complementers are used.
A major advantage of two's-complement representation which hasn't yet been mentioned here is that the lower bits of a two's-complement sum, difference, or product are dependent only upon the corresponding bits of the operands. The reason that the 8 bit signed value for -1 is 11111111 is that subtracting any integer whose lowest 8 bits are 00000001 from any other integer whose lowest 8 bits are 00000000 will yield an integer whose lowest 8 bits are 11111111. Mathematically, the value -1 would be an infinite string of 1's, but all values within the range of a particular integer type will either be all 1's or all 0's past a certain point, so it's convenient for computers to "sign-extend" the most significant bit of a number as though it represented an infinite number of 1's or 0's.
Two's-complement is just about the only signed-number representation that works well when dealing with types larger than a binary machine's natural word size, since when performing addition or subtraction, code can fetch the lowest chunk of each operand, compute the lowest chunk of the result, and store that, then load the next chunk of each operand, compute the next chunk of the result, and store that, etc. Thus, even a processor which requires all additions and subtractions to go through a single 8-bit register can handle 32-bit signed numbers reasonably efficiently (slower than with a 32-bit register, of course, but still workable).
When using any of the other signed representations allowed by the C Standard, every bit of the result could potentially be affected by any bit of the operands, making it necessary to either hold an entire value in registers at once or else follow computations with an extra step that would, in at least some cases, require reading, modifying, and rewriting each chunk of the result.
There are different types of representations:
unsigned number representation
signed number representation
one's complement representation
Two's complement representation
- Unsigned number representation: used to represent only positive numbers.
- Signed (sign-magnitude) number representation: used to represent positive as well as negative numbers. In signed number representation the MSB represents the sign bit and the remaining bits represent the number. When the MSB is 0 the number is positive, and when the MSB is 1 the number is negative.
The problem with signed (sign-magnitude) number representation is that there are two values for 0.
The problem with one's complement representation is that there are two values for 0.
But if we use two's complement representation, then there is only one value for 0. That's why we represent negative numbers in two's complement form.
Source: Why negative numbers are stored in two's complement form, bytesofgigabytes
One satisfactory answer to why two's complement is used to represent negative numbers rather than the one's complement system is that
the two's complement system solves the problem of multiple representations of 0 and the need for end-around carry, both of which exist in the one's complement system of representing negative numbers.
For more information, visit https://en.wikipedia.org/wiki/Signed_number_representations
For end-around carry, visit
https://en.wikipedia.org/wiki/End-around_carry