2's complement of 2 bit numbers - binary

I was learning logic system design and came across 2's complement. My doubt is: what is the 2's complement of 00, or of any other 2-bit number? I couldn't find any answers on the internet.
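For what it's worth, the usual rule (invert the bits and add 1, keeping only n bits) applies to 2-bit numbers the same as to any other width. A minimal sketch, enumerating all four 2-bit values:

```python
# Two's complement of an n-bit value: invert the bits and add 1, modulo 2**n.
def twos_complement(value, bits=2):
    mask = (1 << bits) - 1            # 0b11 for the 2-bit case
    return (~value + 1) & mask        # wrap so the result stays within n bits

for v in range(4):
    print(f"{v:02b} -> {twos_complement(v):02b}")
# 00 -> 00, 01 -> 11, 10 -> 10, 11 -> 01
```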

Sign Magnitude Disadvantages

My prof told me that I got these 2 questions wrong on a test, but I fully disagree with him. Here are the questions:
Which of the following is the most important drawback of the 1's complement method?
A) All cases
B) An end-around carry (carry-out) occurs in 1's complement arithmetic operations.
C) The design and implementation of digital circuits for calculations with this method are complicated.
D) 0 has two different representations: one is -0 and the second is +0.
For this question, I chose "all cases", since the biggest disadvantage of using 1's complement is that there is indeed an end-around carry, AND it has a +0 and a -0. I've also read that one's complement arithmetic needs more computing power and is therefore more complex to implement.
The second question was:
Which of the following is the most important drawback of the Sign-Magnitude method?
A) 0 has two different representations: one is -0 and the second is +0.
B) All cases
C) Designing and implementing digital circuits to perform calculations through the Sign-Magnitude method is very complicated.
D) It is challenging for humans to understand the Sign-Magnitude method.
I chose "0 has 2 representations", since this is the biggest issue with sign-magnitude calculation (that's why we don't use it much anymore), and because of this issue, more circuits are required.
If I'm wrong, could you please explain why? Thank you so much and have a great day!
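A minimal sketch of the two-representations-of-zero point, assuming a 4-bit sign-magnitude encoding (the decoder function here is just illustrative):

```python
# In 4-bit sign-magnitude, both 0000 and 1000 decode to zero (+0 and -0).
def decode_sign_magnitude(bits4):
    sign = -1 if bits4 & 0b1000 else 1   # most significant bit is the sign
    return sign * (bits4 & 0b0111)       # remaining 3 bits are the magnitude

print(decode_sign_magnitude(0b0000), decode_sign_magnitude(0b1000))  # 0 0
```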

Can I represent numbers from 0 to 255 in less than 8 bits?

I know that it takes 8 bits to represent a number like 255 in the binary system. I'm desperately looking for a way of storing numbers from 0 to 255 (especially from 90 to 255) in less than 8 bits. Anything can be helpful, like coordinate systems, spirals, compression, etc.
I need to store a number up to 255 in less than 8 bits (1 byte).
No, not all of them.
You can represent some of them in less than eight bits, and others in more than eight bits. Such a representation could result in an overall compression, if the frequency of occurrence of the byte values is heavily biased in the direction of the ones represented with fewer bits.
Your "especially from 90 to 255" sounds like a small bias. If you could assure that only those values are present, then they could be represented in a little less than 7.38 bits each, on average.

Signed integer conversion

What is -10234(10) in binary with a fixed width of 16 bits in 1) one's complement 2) two's complement 3) signed magnitude.
Please help me step by step, I feel confused about the above three. Many thanks.
That sounds like a homework problem. I'm not going to do your homework for you, because the goal is for you to learn, but I can explain the stuff in my own words for you. In my experience, most of the people who get lost on this stuff just need to hear things said in a way that works for them, rather than having the same thing repeated.
The first thing that you need to understand for this is what the positive version of that number is in base 2. Since the problem says you have 16 bits to hold the signed version, you'll only have 15 bits to get this done.
As far as how to make it negative...
When you're doing signed magnitude, you have one of those bits signal whether the number is positive or negative. For an example, I'll do 4 bits of signed magnitude. Our number starts off as 3, that is 0011. The sign bit is always the most significant bit, so -3 would be 1011.
When you're doing one's complement, you just flip all of the bits. (So if you had an 8-bit one's complement number that's currently positive - let's say it's 25(9+1), or 00011001(1+1) - to make that negative, you'd flip all of those bits, so -25(9+1) is 11100110(1+1) in one's complement.)
Two's complement is the same sort of thing, except that rather than having all 1s (11111111(1+1) for the 8-bit version) be -0, a number we rarely care to distinguish from +0, it adjusts all of the negative numbers by one so that pattern is now -1.
Note that I'm giving the bases in the form of number + 1, because every base is base 10 in that base. But that's me, a grizzled computer professional; if you're still in school, represent bases the way your instructor tells you to, but understand they're crazy. (I can prove they're crazy: 1. They're human. 2. QED. In future years when some people are just learning from AIs, the proof is slightly more complicated: 1. They were made, directly or indirectly, by humans. 2. All humans are crazy. 3. QED.)
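If it helps to see those three rules done mechanically, here's a minimal sketch using the same example values as above (not the homework number); the function names are just illustrative:

```python
# 4-bit sign-magnitude for 3, 8-bit one's and two's complement for 25.
def sign_magnitude_negate(value, bits):
    # Set the most significant bit as the sign, keep the magnitude as-is.
    return (1 << (bits - 1)) | value

def ones_complement_negate(value, bits):
    # Flip every bit within the word width.
    return ~value & ((1 << bits) - 1)

def twos_complement_negate(value, bits):
    # One's complement plus one, wrapped to the word width.
    return (~value + 1) & ((1 << bits) - 1)

print(f"{sign_magnitude_negate(3, 4):04b}")    # 1011      (-3 in sign-magnitude)
print(f"{ones_complement_negate(25, 8):08b}")  # 11100110  (-25 in one's complement)
print(f"{twos_complement_negate(25, 8):08b}")  # 11100111  (-25 in two's complement)
```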

how does a computer work out if a value is greater than?

I understand basic binary logic and how to do basic addition, subtraction, etc. I get that each of the characters in this text is just a binary number representing a number in a charset. The numbers don't really mean anything to the computer. I'm confused, however, as to how a computer works out that one number is greater than another. What does it do at the bit level?
If you have two numbers, you can compare each bit, from most significant to least significant, using a 1-bit comparator gate. Of course, n-bit comparator gates exist as well; they cascade the same idea across all of the bits.
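As a sketch of what a single 1-bit comparator computes (the output names gt/eq/lt are just illustrative):

```python
# A 1-bit comparator expressed as gates: for bits a and b,
#   a > b   is  a AND (NOT b)
#   a < b   is  (NOT a) AND b
#   a == b  is  NOT (a XOR b)
def comparator_1bit(a, b):
    gt = a & (~b & 1)
    lt = (~a & 1) & b
    eq = (a ^ b) ^ 1
    return gt, eq, lt

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", comparator_1bit(a, b))   # (gt, eq, lt)
```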
It subtracts one from the other and sees if the result is less than 0 (by checking the highest-order bit, which is 1 on a number less than 0 since computers use 2's complement notation).
http://academic.evergreen.edu/projects/biophysics/technotes/program/2s_comp.htm
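A minimal sketch of that subtract-and-check-the-sign-bit idea, assuming 8-bit two's complement values and ignoring overflow for simplicity:

```python
# In n-bit two's complement, a - b is negative exactly when its sign bit
# (the highest-order bit) is set. Real hardware also consults the overflow
# flag for signed comparisons; that is skipped here for clarity.
BITS = 8
MASK = (1 << BITS) - 1
SIGN = 1 << (BITS - 1)

def less_than(a, b):
    diff = (a - b) & MASK        # 8-bit two's complement subtraction
    return bool(diff & SIGN)     # sign bit set means a < b

print(less_than(3, 7))   # True
print(less_than(7, 3))   # False
```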
It subtracts the two numbers and checks if the result is positive, negative (the highest bit, aka "the minus bit", is set), or zero.
Within the processor, there will often be microcode to do operations using hardwired units, such as add/subtract, that are already there.
So, to compare two integers, the microcode can just do a subtraction and, based on the result, determine whether one is greater than the other.
Microcode is basically just low-level programs that are called by assembly, to make it look like there are more instructions than are actually hardwired on the processor.
You may find this useful:
http://www.osdata.com/topic/language/asm/intarith.htm
I guess it does a bitwise comparison of two numbers from the most significant bit to the least significant bit, and when they differ, the number with the bit set to "1" is the greater.
In a Big-endian architecture, the comparison of the following Bytes:
A: 0010 1101
B: 0010 1010
would result in A being greater than B, because its 6th bit (from the left) is set to one while the preceding bits are equal to those of B.
But this is just a quick theoretical answer, with no concern for floating-point or negative numbers.
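A minimal sketch of that scan, assuming plain unsigned bytes like A and B above:

```python
# Bit-by-bit comparison: scan from the most significant bit down; the first
# position where the bits differ decides. Unsigned values only.
def compare_msb_first(a, b, bits=8):
    for i in range(bits - 1, -1, -1):
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        if bit_a != bit_b:
            return 1 if bit_a else -1   # whoever has the 1 is greater
    return 0                            # all bits equal

A = 0b00101101
B = 0b00101010
print(compare_msb_first(A, B))   # 1, i.e. A > B (first difference at the 6th bit from the left)
```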

Why is the one's complement representation better than others?

It seems like one's complement representation of signed numbers is the most popular now (and probably the only representation used in modern hardware). Why exactly is it better than the others?
Actually the dominant representation is two's complement.
Representation methods include:
- signed magnitude
- one's complement
- two's complement
One's complement replaced signed magnitude because the circuitry to implement it was much simpler.
One's complement has 2 representations for zero, which complicates programming since code needs to test for both -0 and +0.
This problem is not present in two's complement (which has a single representation for 0), the dominant representation used universally today.
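A minimal sketch of both points, assuming 8-bit words:

```python
# 8-bit one's complement has two zeros (00000000 and 11111111), while two's
# complement has one zero and plain binary addition handles signs with no
# special casing.
BITS = 8
MASK = (1 << BITS) - 1

plus_zero = 0b00000000
minus_zero_ones = (~0) & MASK          # 11111111, "negative zero" in one's complement
print(f"{plus_zero:08b} {minus_zero_ones:08b}")

# Two's complement: (-3) + 5 with ordinary unsigned addition, then mask.
neg3 = (-3) & MASK                     # 11111101
print(f"{(neg3 + 5) & MASK:08b}")      # 00000010, i.e. 2, no sign handling needed
```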
This question starts with a false premise. Two's complement is superior and common. This is because it doesn't have two representations of zero, and the hardware is simpler because the circuitry doesn't need to check the sign before performing addition and subtraction.