How do I know whether a binary number is a positive/negative (signed) number or a plain unsigned value?

I'm learning about integer data formats in a computer science book, and as far as I understand, the binary representation of an integer marks whether it is positive or negative by having the leftmost bit (the MSB) be either 0 for positive or 1 for negative. Say we're on an 8-bit computer: how would I know whether 10000010 means 130 in base 10, or whether it refers to negative 2 (a sign bit plus the magnitude 2)?
I might be wrong; if I am, please correct me.

If you were to just see the string 10000010 somewhere, I don't know... written on a wall or something, how would you know how to interpret it?
You might say, hey, that's ten million and ten (you thought it was base 10). Or you might say, hey, that's -126 (you thought it was two's complement binary). Or you might say that's positive 130 (you thought it was standard binary).
It is, in a theoretical sense, up to whatever is doing the interpreting how it is interpreted.
So, when a computer is holding 8 bits of data, it is up to the computer how those bits are interpreted.
Now if you're programming, you can tell the computer how you want something interpreted. For example, in C++:
// char is 1 byte
unsigned char x = 130u;
Here I have told the compiler to put 130, unsigned, into a byte, so the computer will store 10000010 and will interpret it as the value 130.
Now consider
// signed char is 1 byte; plain char's signedness is implementation-defined
signed char x = -126;
Here I have told the compiler to put -126, signed, into a byte, so the computer will again store 10000010, but this time it will interpret it as the value -126.
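To see both interpretations of the same stored byte side by side, here is a minimal sketch. It assumes a two's complement machine, where converting the bit pattern 10000010 to signed char yields -126 (this conversion is implementation-defined before C++20, but behaves this way on all mainstream platforms):
#include <iostream>

int main() {
    unsigned char u = 0x82;                       // the bit pattern 10000010
    signed char s = static_cast<signed char>(u);  // same bits, viewed as signed
    std::cout << static_cast<int>(u) << "\n";     // prints 130
    std::cout << static_cast<int>(s) << "\n";     // prints -126
}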

Take a look at the answer posted to this question: Negative numbers are stored as 2's complement in memory, how does the CPU know if it's negative or positive?
The CPU uses something called an opcode to determine which operation it will perform when manipulating a memory location (in this case, the value 10000010). It is that operation within the CPU that treats the value as either a negative or a positive number. The CPU doesn't have access to whether the number is signed or unsigned; the opcode used when manipulating that number determines whether the operation is a signed or an unsigned one.
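As a rough illustration of the same idea one level up, in C++: the declared type decides which kind of comparison the compiler emits for identical bits, which is ultimately what selects signed or unsigned opcodes. A sketch, assuming a two's complement platform:
#include <iostream>

int main() {
    unsigned char u = 0x82;                       // 10000010
    signed char s = static_cast<signed char>(u);  // the same bits

    // The same bit pattern compares differently depending on the declared type.
    std::cout << (u > 0) << "\n";  // 1: as 130, it is greater than zero
    std::cout << (s > 0) << "\n";  // 0: as -126, it is not
}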

Related

Can we interpret a negative binary number as positive too? (Read the question, please.)

I already know the concept of negative binary numbers: a 0 in the most significant bit means the binary number is positive, and a 1 in the most significant bit means it is negative.
But the problem that prompted me to ask a question on Stack Overflow is this:
what about the times when we might want to represent a huge number whose representation happens to have a 1 in the MSB?
Let me explain it this way: by considering the above rule for making negative counterparts of our binary numbers, we could say that,
in an 8-bit system, a value of positive 12 (decimal) would be written as 00001100 in binary, and negative 12 (decimal) would be written as
10001100. But what confuses me a bit is that 10001100 could also be interpreted as 140 in decimal, while we know that it's the negative form of 12 in binary using this method of conversion.
I just want to know how to deal with these tricky, two-faced ways of interpreting a binary number, like the example I gave above (it seems to be negative; oh, but wait, it might not be!).
It depends on the type you use. If you're using an 8-bit representation which is signed, then the largest number you can store is 1111111 (seven bits, since the leftmost bit is set aside as the sign bit).
In our example, that would convert to an integer value of 127.
If you use an unsigned type, then you have the extra bit available, allowing you to store 11111111, or 255 expressed as an integer.
A strongly typed language with a good compiler should catch you trying to assign, say, 134 to a signed 8-bit integer and vomit errors all over you.
If you're doing something strange, fiddling around with bits yourself, you're on your own! There's no way of reconstructing, post hoc, whether a value was intended to be a negative number or a large positive one, so you'll have to choose a system and stick with it.
The general convention nowadays is to stick with signed representations, although I have seen code (usually for extreme compute tasks like astrophysical calculations) use unsigned values simply to save memory. And image data, by convention, usually uses unsigned values.
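A minimal sketch of the type-driven interpretation, using the fixed-width types from <cstdint>. Note that C++ signed types use two's complement (required since C++20 and universal in practice), so 10001100 reads as -116 here, not as the sign-magnitude -12 from the question:
#include <cstdint>
#include <iostream>

int main() {
    std::uint8_t u = 0x8C;                        // 10001100
    std::int8_t s = static_cast<std::int8_t>(u);  // same bits, signed type
    std::cout << +u << "\n";  // 140 (unary + promotes to int so it prints as a number)
    std::cout << +s << "\n";  // -116 in two's complement, not the sign-magnitude -12
}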

Binary Numbers - Difference between 15 and -1

I was learning about binary numbers and two's complement.
Let's say I have the binary number 1111. This is 15, but it is also -1 (obtained via the two's complement method).
Can you explain how I can tell whether it is 15 or -1?
Depends on the data type you use. Most programming languages offer signed and unsigned types.
A series of bits means nothing without a data type. E.g. an unsigned Int16 would contain only non-negative numbers up to 16 bits, while a signed Int16 would also contain negative numbers (but, of course, fewer positive ones).
It's a matter of definition. If I write 10, you could read ten (decimal) or two (binary) or a whole bunch of other numbers, depending on the number system. If you don't know which system I use, there's no way you can tell what I mean. In your case, 15 is the answer in an unsigned binary system, and -1 is the answer in a two's complement binary system.
If the register is 4-bit two's complement, then the range of values that can be represented is -8 to 7, so 15 is out of the question. For 15 to be represented, an unsigned register has to be used.
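C++ has no 4-bit type, so here is a sketch with a hypothetical helper (nibbleAsSigned is made up for illustration) that reads the same 4-bit pattern 1111 both ways:
#include <iostream>

// Hypothetical helper: read the low 4 bits of x as a two's complement nibble.
int nibbleAsSigned(unsigned x) {
    unsigned n = x & 0xF;                 // keep only the low 4 bits
    if (n & 0x8)                          // bit 3 set: the nibble is negative
        return static_cast<int>(n) - 16;  // sign-extend by subtracting 2^4
    return static_cast<int>(n);
}

int main() {
    unsigned bits = 0xF;                        // the pattern 1111
    std::cout << bits << "\n";                  // 15, read as unsigned
    std::cout << nibbleAsSigned(bits) << "\n";  // -1, read as two's complement
}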

How does the computer recognise that a given number is in its two's complement form?

I understand what two's complement is and what it is useful for. What I'd like to know is: how does the computer decide that a number is in its two's complement form?
How and when does it decide that 1111 1110 is -2 and not 254? Is it at the OS level of processing?
As far as I can tell, it depends on the programming language.
Let's say an integer occupies 1 byte of memory (to keep it simple).
If it is an UNSIGNED integer (only non-negative numbers), you can use any number from 0 to 255 (2^8 numbers in total, zero included).
00000000 would be 0, and
11111111 would be 255 decimal.
But if your integer is SIGNED (you can use both negative and positive numbers), you can use values from -128 to 127, zero included (again 2^8 numbers).
If your compiler encounters 11111111 as a SIGNED int value, it will not interpret it as 255, because a signed int allows only the values 0 to 127 for non-negative numbers, so it will take it as -1. The next one, -2, would be 11111110 (254 as an unsigned value), and so on...
The computer will already be expecting the data to be in (or not in) two's complement form; otherwise there wouldn't be a way of telling whether it is -2 or 254. And that expectation is typically established by the language and compiler rather than at the OS level.
You can probably relate this to the way variable types are declared in a high-level programming language: you'll more than likely set the type to be "decimal", for example, or "integer", and the compiler will then expect values to stick to this type.

negative integers in binary

5 (decimal) in binary 00000101
-5 (two's complement) in binary 11111011
but 11111011 is also 251 (decimal)!
How does the computer discern one from the other?
How does it know whether it's -5 or 251?
It's THE SAME 11111011!
Thanks in advance!
Signed bytes have a maximum of 127.
Unsigned bytes cannot be negative.
The compiler knows whether the variable holding that value is of a signed or unsigned type, and treats it appropriately.
If your program chooses to treat the byte as signed, the run-time system decides whether the byte is to be considered positive or negative according to the high-order bit. A 1 in that high-order bit (bit 7, counting from the low-order bit 0) means the number is negative; a 0 in that bit position means the number is positive. So, in the case of 11111011, bit 7 is set to 1 and the number is treated, accordingly, as negative.
Because the sign bit takes up one bit position, the positive values can range from 0 to 127, while the negative values extend down to -128 in two's complement.
If your program chooses to treat the byte as unsigned, on the other hand, what would have been the sign bit is included in the magnitude, which can then range from 0 to 255.
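A small sketch of the high-order-bit rule described above, assuming a two's complement machine (the unsigned-to-signed conversion is implementation-defined before C++20):
#include <iostream>

int main() {
    unsigned char b = 0xFB;                   // 11111011
    bool negativeIfSigned = (b & 0x80) != 0;  // test bit 7, the sign bit
    std::cout << negativeIfSigned << "\n";    // 1: bit 7 is set
    std::cout << +b << "\n";                  // 251 when treated as unsigned
    std::cout << +static_cast<signed char>(b) << "\n";  // -5 when treated as signed
}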
Two's complement is designed so that signed numbers can be added and subtracted in the same way unsigned numbers are. So there are only two cases where the signedness of numbers affects the computer at a low level:
when there are overflows
when you are performing operations on mixed operands: one signed, one unsigned
Different processors take different tacks on this. With respect to overflows, the MIPS RISC architecture, for example, deals with overflows using traps. See http://en.wikipedia.org/wiki/MIPS_architecture#MIPS_I_instruction_formats
To the best of my knowledge, mixing signed and unsigned values needs to be avoided at the program level.
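To see the addition point concretely, a sketch showing that an 8-bit adder produces the same bits whether its operands are viewed as signed or unsigned:
#include <cstdint>
#include <iostream>

int main() {
    std::int8_t sa = -5, sb = 7;     // signed:   -5 + 7
    std::uint8_t ua = 0xFB, ub = 7;  // unsigned: 251 + 7, the same bit patterns

    // C++ promotes the operands to int, so truncate the sums back to one
    // byte, just as an 8-bit adder would (the carry out is discarded).
    std::uint8_t ssum = static_cast<std::uint8_t>(sa + sb);  // 00000010
    std::uint8_t usum = static_cast<std::uint8_t>(ua + ub);  // 00000010

    std::cout << (ssum == usum) << "\n";  // 1: identical bits either way
}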
If you're asking "how does the program know how to interpret the value", in general it's because you've told the compiler the "type" of the variable you assigned the value to. The program doesn't actually care that 00000101 is "5 decimal"; it just has an unsigned integer with the value 00000101, on which it can perform the operations legal for unsigned integers, and it will behave in a given manner if you try to compare with, or cast to, a different "type" of variable.
At the end of the day, everything in programming comes down to binary: all data (strings, numbers, images, sounds, etc.) and the compiled code just end up as one large binary blob.

Extracting a bit-field from a signed number

I have signed numbers (two's complement) stored in 32-bit integers, and I want to extract 16-bit fields from them. Is it true that if I extract the low 16 bits of a 32-bit signed number, the result will be correct as long as the original (32-bit) number fits into 16 bits?
For positive numbers it is trivially true, and it seems to hold for negative numbers as well. But can it be proven?
Thanks in advance
Yes, in two's complement the sign bit extends "all the way" to the left. When you cast a signed short to a signed int, the number is "sign extended" and has the same value.
Example: Nibble(-2) = 1110 => Byte(-2) = 1111_1110
Obviously the opposite is true too: if you keep at least one copy of the sign bit when truncating, the value of the number remains unchanged.
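Since only 65536 distinct values fit in 16 bits, the claim can also be checked exhaustively. A brute-force sketch (the unsigned-to-signed conversion is implementation-defined before C++20, but is the expected two's complement wraparound on mainstream platforms):
#include <cstdint>
#include <iostream>

int main() {
    bool ok = true;
    // Check every 32-bit value that fits in a signed 16-bit field.
    for (std::int32_t x = -32768; x <= 32767; ++x) {
        std::uint16_t low = static_cast<std::uint16_t>(x);   // extract the low 16 bits
        std::int16_t back = static_cast<std::int16_t>(low);  // reinterpret them as signed
        if (back != x) { ok = false; break; }
    }
    std::cout << (ok ? "all 65536 values survive" : "mismatch") << "\n";
}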
From my (second) reading of your question, it doesn't seem as if you need to "extract" any bits, but rather convert the whole number?
I.e. do something like this:
int negative = -4711;       // fits comfortably in 16 bits
short x = (short) negative; // keeps only the low 16 bits; the value survives
In this case, the compiler will make sure that as much as possible of the original number's precision gets converted in the assignment. This would be the case even if the underlying hardware were not using two's complement. If it is, then this is likely to be just a truncation, as explained by Motti.