Why is the one's complement representation better than others? - binary

It seems like one's complement is the most popular representation of signed numbers now (and probably the only one used in modern hardware). Why exactly is it better than the others?

Actually the dominant representation is two's complement.
Representation methods include:
- signed magnitude
- one's complement
- two's complement
One's complement replaced signed magnitude because the circuitry to implement it was much simpler.
One's complement has two representations for zero, which complicates programming since code may need to test for both -0 and +0.
This problem is not present in two's complement, which has a single representation for 0 and is the dominant representation used universally today.

This question starts with a false premise. Two's complement is the superior and common representation: it doesn't have two representations of zero, and the hardware is simpler because the circuitry doesn't need to check the sign before performing addition and subtraction.


Sign Magnitude Disadvantages

My prof told me that these 2 questions were wrong on a test, but I fully disagree with him. Here are the questions
Which of the following is the most important drawback of the 1's complement
method?
A) All cases
B) An end-around carry (carry-out) occurs in 1's complement arithmetic
operations.
C) The design and implementation of digital circuits for calculations with this
method are complicated.
D) 0 has two different representations: one is -0 and the second is +0
For this question, I chose "all cases", since the biggest disadvantage of using 1's complement is that there is indeed an end-around carry, AND it has a +0 and a -0. I've also read that one's complement needs more computation, and thus more complex circuitry.
The second question was
Which of the following is the most important drawback of the Sign-Magnitude
method?
A) 0 has two different representations: one is -0 and the second is +0
B) All cases
C) Designing and implementing digital circuits to perform calculations through the
Sign-Magnitude method is very complicated
D) It is challenging for humans to understand the Sign-Magnitude method
I chose "0 has 2 representations", since this is the biggest issue with sign-magnitude arithmetic (that's why we rarely use it anymore), and because of this issue more circuitry is required.
If I'm wrong, could you please explain why? Thank you so much and have a great day!

Signed integer conversion

What is -10234(10) in binary with a fixed width of 16 bits in 1) one's complement 2) two's complement 3) signed magnitude.
Please help me step by step, I feel confused about the above three. Many thanks.
That sounds like a homework problem. I'm not going to do your homework for you, because the goal is for you to learn, but I can explain the stuff in my own words for you. In my experience, most of the people who get lost on this stuff just need to hear things said in a way that works for them, rather than having the same thing repeated.
The first thing you need to understand is what the positive version of that number is in base 2. Since the problem gives you 16 bits to handle the signed version, you'll only have 15 bits for the magnitude.
As far as how to make it negative...
When you're doing signed magnitude, you would have one of those bits signal whether it was positive or negative. For an example, I'll do 4 bits of signed magnitude. Our number starts off as 3, that is 0011. The signed bit is always the most significant bit, so -3 would be 1011.
When you're doing one's complement, you just flip all of the bits. (So if you had an 8 bit one's complement number that's currently positive - let's say it's 25(9+1), or 00011001(1+1) - to make that -25 in one's complement, you'd flip all of those bits, so -25(9+1) is 11100110(1+1) in one's complement.)
Two's complement is the same sort of thing, except that rather than having all 1s (11111111(1+1) for the 8 bit version) be -0, a number we rarely care to distinguish from +0, it adjusts all of the negative numbers by one so that's now -1.
Note that I'm giving the bases in the form (number + 1), because every base is "base 10" in that base. But that's me, a grizzled computer professional; if you're still in school, represent bases the way your instructor tells you to, but understand they're crazy. (I can prove they're crazy: 1. They're human. 2. QED. In future years when some people are just learning from AIs, the proof is slightly more complicated: 1. They were made, directly or indirectly, by humans. 2. All humans are crazy. 3. QED.)

Why are binary code converters needed?

I was wondering why a computer would need binary code converters to convert from BCD to Excess-3, for example. Why is this necessary? Can't computers just use one form of binary code?
Some older forms of binary representation persist even after a newer, "better" form comes into use. For example, legacy hardware that is still in use running legacy code that would be too costly to rewrite. Word lengths were not standardized in the early years of computing, so machines with words varying from 5 to 12 bits in length naturally will require different schemes for representing the same numbers.
In some cases, a company might persist in using a particular representation for self-compatibility (i.e., with the company's older products) reasons, or because it's an ingrained habit or "the company way." For example, the use of big-endian representation in Motorola and PowerPC chips vs. little-endian representation in Intel chips. (Though note that many PowerPC processors support both types of endian-ness, even if manufacturers typically only use one in a product.)
The previous paragraph only really touches upon byte ordering, but that can still be an issue for data interchange.
Even for BCD, there are many ways to store it (e.g., 1 BCD digit per word, or 2 BCD digits packed per byte). IBM has a clever representation called zoned decimal where they store a value in the high-order nybble which, combined with the BCD value in the low-order nybble, forms an EBCDIC character representing the value. This is pretty useful if you're married to the concept of representing characters using EBCDIC instead of ASCII (and using BCD instead of 2's complement or unsigned binary).
Tangentially related: IBM mainframes from the 1960s apparently converted BCD into an intermediate form called qui-binary before performing an arithmetic operation, then converted the result back to BCD. This is sort of a Rube Goldberg contraption, but according to the linked article, the intermediate form gives some error detection benefits.
The IBM System/360 (and probably a bunch of newer machines) supported both packed BCD and pure binary representations, though you have to watch out for IBM nomenclature — I have heard an old IBMer refer to BCD as "binary," and pure binary (unsigned, 2's complement, whatever) as "binary coded hex." This provides a lot of flexibility; some data may naturally be best represented in one format, some in the other, and the machine provides instructions to convert between forms conveniently.
In the case of floating point arithmetic, there are some values that cannot be represented exactly in binary floating point, but can be with BCD or a similar representation. For example, the number 0.1 has no exact binary floating point equivalent. This is why BCD and fixed-point arithmetic are preferred for things like representing amounts of currency, where you need to exactly represent things like $3.51 and can't allow floating point error to creep in when adding.
Intended application is important. Arbitrary precision arithmetic will require a different representation strategy compared to the fixed-width registers in your CPU (e.g., Java's BigDecimal class). Many languages support arbitrary precision (e.g., Scheme, Haskell), though the underlying implementation of arbitrary precision numbers varies. I'm honestly not sure what is preferable for arbitrary precision, a BCD-type scheme or a denser pure binary representation. In the case of Java's BigDecimal, conversion from binary floating point to BigDecimal is best done by first converting to a String — this makes such conversions potentially inefficient, so you really need to know ahead of time whether float or double is good enough, or whether you really need arbitrary precision, and when.
Another tangent: Groovy, a JVM language, quietly treats all floating point numeric literals in code as BigDecimal values, and uses BigDecimal arithmetic in preference to float or double. That's one reason Groovy is very popular with the insurance industry.
tl;dr There is no one-size-fits-all numeric data type, and as long as that remains the case (probably the heat death of the universe), you'll need to convert between representations.

how does a computer work out if a value is greater than?

I understand basic binary logic and how to do basic addition, subtraction, etc. I get that each of the characters in this text is just a binary number representing a character in a charset. The numbers don't really mean anything to the computer. I'm confused, however, as to how a computer works out that a number is greater than another. What does it do at the bit level?
If you have two numbers, you can compare each bit, from most significant to least significant, using a 1-bit comparator gate.
Of course, n-bit comparator gates exist and are described further here.
It subtracts one from the other and sees if the result is less than 0 (by checking the highest-order bit, which is 1 on a number less than 0 since computers use 2's complement notation).
http://academic.evergreen.edu/projects/biophysics/technotes/program/2s_comp.htm
It subtracts the two numbers and checks if the result is positive, negative (highest bit - aka "the minus bit" - is set), or zero.
Within the processor, often there will be microcode to do operations, using hardwired options, such as add/subtract, that is already there.
So, to do a comparison of an integer the microcode can just do a subtraction, and based on the result determine if one is greater than the other.
Microcode is basically just low-level programs called by assembly instructions, to make it look like there are more commands than are actually hardwired on the processor.
You may find this useful:
http://www.osdata.com/topic/language/asm/intarith.htm
I guess it does a bitwise comparison of two numbers from the most significant bit to the least significant bit, and when they differ, the number with the bit set to "1" is the greater.
In a Big-endian architecture, the comparison of the following Bytes:
A: 0010 1101
B: 0010 1010
would result in A being greater than B, since its 6th bit (from the left) is set to one while the preceding bits are equal to those of B.
But this is just a quick theoretical answer, with no concern for floating point or negative numbers.

1's and 2's complement systems

I'm trying to understand the differences between these two systems and their impact on C programming.
From what I've learned from Wikipedia:
both systems are used to represent negative numbers
one's complement applies bitwise NOT to the positive number to get its negative (the system has +0 and -0)
two's complement does as in the previous step and then adds 1 (eliminating +/-0)
Am I missing something else?
My questions:
which architectures support which system? What is the most common these days (1's or 2's complement)?
in what sense should we consider these systems when programming in C? Does it mainly make sense only in embedded world?
Thanks in advance!
Most systems nowadays use two's complement, since it lets the computer do the same exact operation for addition/subtraction without caring about the particular sign of the number.
When you're programming, the arithmetic works regardless of the system used -- the ranges of the data types are defined by the language, so if it says a type will work in the range -2^31 to +2^31 - 1, then it'll work regardless of the notation. You need to be careful when working with individual bits or bit shifts, though -- those won't behave like power-of-two arithmetic on non-two's-complement systems (although you're not too likely to encounter such systems, and probably never will if you're just working with PCs).
The only advantage of ones'-complement notation for integers is that it allows conversions to and from sign-magnitude form to be performed without a carry chain. Building a computer with a set of blinkenlights that show each register's value in sign-magnitude form will be much more convenient if the registers use ones'-complement form than if they use two's-complement form. If one wanted to use separate storage latches for the blinkenlights and the CPU's registers, the easiest way to accommodate things would be to have one circuit which translates two's-complement to one's-complement or sign-magnitude form, and then have each register write simultaneously store the two's-complement value in the register while updating the blinkenlight latches with the sign-magnitude value. Latching circuitry is sufficiently expensive, however, that if registers are being built out of discrete latches anyway, adding some circuitry to the ALU to make it use ones'-complement, and then feeding the lights from the CPU's "real" registers, may be cheaper than including an extra set of latches for the lights.
Over the last few decades, of course, the relative costs of different circuit elements have shifted to the point that it would be absurd to have lights wired to directly report the state of a CPU's registers. Consequently, the practical advantages that ones'-complement designs might have had in the past are no longer applicable.