For unsigned integers we can represent numbers ranging from
0 to 2^N - 1
And for signed integers like two's complement the range is
-2^(N-1) to 2^(N-1) - 1
How is the signed range calculated?
For unsigned integers:
We use all N bits to represent numbers ranging from 0 to 2^N - 1. If we place either 0 or 1 in each of the N bit positions and count the different configurations, the largest integer we can get is 2^N - 1 (which is 11...1, N ones) and the smallest is 0 (which is 00...0, N zeros). Hence all values from 0 to 2^N - 1 can be represented.
For signed integers:
Here we basically use N-1 bits to represent the magnitude, and 1 bit is reserved for the sign, so we can represent numbers ranging from -2^(N-1) to 2^(N-1) - 1. A most significant bit of 1 indicates a negative integer, whereas a most significant bit of 0 indicates a non-negative value. The negatives run from -1 (the pattern with all N bits set) down to -2^(N-1) (only the most significant bit set), which is why the negative side gets one more value than the positive side. Compilers generally use two's complement representation for integers.
Here's the deal. Let's do it for 2 bits, so N=2.
We get the range -2 to 1, i.e. it can represent -2, -1, 0, 1.
Now if N=4, we have the range -8 to +7.
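Those endpoints are easy to check in code. Here's a minimal Python sketch (the helper name is just for illustration):

    def int_ranges(n):
        """Return the (unsigned, signed) value ranges for an n-bit integer."""
        unsigned = (0, 2**n - 1)
        signed = (-2**(n - 1), 2**(n - 1) - 1)  # two's complement
        return unsigned, signed

    print(int_ranges(2))  # ((0, 3), (-2, 1))
    print(int_ranges(4))  # ((0, 15), (-8, 7))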
I can't see where you're stuck.
Related
I am currently learning SQL.
When looking at INT, I came to the understanding that an INT type is 4 bytes long, with 8 bits in each byte, making each INT 32 bits.
However, for INT it is said that the max value for unsigned types is (2^32)-1, where the -1 accounts for the 0 value. I understand that the 32 comes from the fact that each INT is 32 bits.
My question is where does the 2 come from in the calculation?
My intuition is telling me that each bit will have some sort of measure valued at 2.
int is actually a signed type in SQL. The range is from -2^31 through 2^31 - 1, which is -2,147,483,648 to 2,147,483,647. There are exactly 2^32 possible values in this range. Note that it includes 0.
An unsigned integer would range from 0 to 2^32 - 1, that is, up to 4,294,967,295. The -1 is because 0 is included in the range, so the counting starts at 0 rather than 1.
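You can verify the arithmetic quickly; a minimal Python sketch (the math is the same regardless of the database):

    n = 32
    print(-2**(n - 1), 2**(n - 1) - 1)  # -2147483648 2147483647 (signed INT)
    print(0, 2**n - 1)                  # 0 4294967295 (unsigned INT)
    print(2**n)                         # 4294967296 possible values either way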
The range of possible values is easily seen with fewer bits. For instance, 3 bits can represent the values from -4 to 3:
Bits Unsigned Signed
000 0 0
001 1 1
010 2 2
011 3 3
100 4 -4
101 5 -3
110 6 -2
111 7 -1
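A table like this can be generated for any width. A small Python sketch, assuming the usual subtract-2^n rule for two's complement:

    bits = 3
    for i in range(2**bits):
        pattern = format(i, f'0{bits}b')
        signed = i - 2**bits if i >= 2**(bits - 1) else i  # two's complement
        print(pattern, i, signed)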
Computers use the binary system for storing values. Let's try an analogy between the binary and decimal systems:
Consider the 10-based (decimal) system. If you have a number with 32 decimal places, every place having a value 0-9, you have 10^32 possible values (obviously enough; we use this system on a daily basis).
Now consider the 2-based system, which is the one used by computers (for practical reasons: two states are easiest to distinguish and the wiring logic is simplest). Every place (bit) has a value 0-1, so there are 2^32 possible values.
Suppose you have 14 bits. How do you determine how many integers can be represented in binary from those 14 bits?
Is it simply just 2^n? So 2^14 = 16384?
Please note this part of the question: "how many INTEGERS can be represented in BINARY...". That's where my confusion lies in what otherwise seems like a fairly straightforward question. If the question were just asking how many different values or numbers can be represented from 14 bits, then yes, I'm certain it's just 2^n.
The answer depends on whether you need signed or unsigned integers.
If you need unsigned integers, then using 2^n you can represent integers from 0 to 2^n exclusive. E.g. n=2: 2^2=4, so you can represent the integers from 0 to 4 exclusive (0 to 3 inclusive). Therefore with n bits you can represent a maximum unsigned integer value of 2^n - 1, but a total count of 2^n different integers, including 0.
If you need signed integers, then half of the values are negative and half are non-negative, and 1 bit is used to indicate whether the integer is positive or negative. You then calculate using 2^n/2. E.g. n=2: 2^2/2=2, so you can represent the integers from -2 to 2 exclusive (-2 to +1 inclusive). 0 takes one of the non-negative patterns, so you get 2 negative values (-2, -1) and 2 non-negative values (0 and +1). Therefore with n bits you can represent signed integer values between -2^n/2 and +2^n/2 - 1, but you still have a total count of 2^n different integers, as you did with the unsigned integers.
Yes, it's that easy as 2^n.
A bit can have 2 distinct values: 0 and 1.
If you have 2 bits, then you have 4 distinct values: 00, 01, 10, 11. The list goes on.
Combinatorics has the simple counting formula
N = n_1 ⋅ n_2 ⋅ ... ⋅ n_k
Since n_1 = n_2 = ... = n_k = 2, you can reduce the formula to
N = 2^k
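You can confirm the count by brute force; a minimal Python sketch using itertools:

    from itertools import product

    k = 14
    patterns = list(product([0, 1], repeat=k))  # every possible k-bit pattern
    print(len(patterns))          # 16384
    print(len(patterns) == 2**k)  # True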
Here's a question I've come across:
Assume each X represents one bit, either 0 or 1. Consider the 8-bit unsigned binary numbers A = 1XXX XXXX and B = 0XXX XXXX. Which of the following are true (you may tick more than one answer):
A) B > A
B) A > 127
C) Can't tell which of A or B is larger
D) B < 127
E) A > B
Explanations needed (0 understanding on this). Thanks!
The key to the answer is in the word unsigned. This means that the MSB (left-most bit) is not being used to indicate the result's sign. Processors perform mathematical operations such as add, subtract and compare using two's complement, which means that to know the numeric value of a binary word we must know whether it is signed (can contain negative values) or unsigned (positive numbers only).
So in the above case the values are unsigned, which means A is always greater than B: A has the MSB of an 8-bit value set to 1, so it must be at least 128, while B has the MSB clear, so it can be at most 127.
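Since there are only 256 possible 8-bit values, you can brute-force this; a quick Python sketch:

    A_values = range(128, 256)  # A = 1XXX XXXX: MSB set, so 128..255
    B_values = range(0, 128)    # B = 0XXX XXXX: MSB clear, so 0..127
    print(all(a > b for a in A_values for b in B_values))  # True: A > B always
    print(all(a > 127 for a in A_values))                  # True: A > 127 always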
In the same way that we count in units of 10s, binary works in units of two:
Binary place values: 128 64 32 16 8 4 2 1
Decimal place values: 1000 100 10 1
However, if the binary value were signed, the left-most bit would be used to express positive (0) or negative (1), and when it is negative we need to invert the bits and add one to recover the magnitude of the (negative) result.
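The invert-and-add-one step looks like this in code; a minimal Python sketch for 8-bit values (the function name is made up):

    def to_signed8(pattern):
        """Interpret an 8-bit string as a two's complement value."""
        v = int(pattern, 2)
        if pattern[0] == '1':                 # MSB set: negative
            magnitude = (v ^ 0b11111111) + 1  # invert the bits, add one
            return -magnitude
        return v

    print(to_signed8('01111111'))  #  127
    print(to_signed8('11111111'))  #   -1
    print(to_signed8('10000000'))  # -128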
For example, if n=9, then how many different values can be represented in 9 binary digits (bits)?
My thinking is that if I set each of those 9 bits to 1, I will make the highest number those 9 digits are able to represent. Therefore, the highest value is 1 1111 1111, which equals 511 in decimal. I conclude, therefore, that 9 digits of binary can represent 511 different values.
Is my thought process correct? If not, could someone kindly explain what I'm missing? How can I generalize it to n bits?
2^9 = 512 values, because that's how many combinations of zeroes and ones you can have.
What those values represent however will depend on the system you are using. If it's an unsigned integer, you will have:
000000000 = 0 (min)
000000001 = 1
...
111111110 = 510
111111111 = 511 (max)
In two's complement, which is commonly used to represent integers in binary, you'll have:
000000000 = 0
000000001 = 1
...
011111110 = 254
011111111 = 255 (max)
100000000 = -256 (min) <- yay integer overflow
100000001 = -255
...
111111110 = -2
111111111 = -1
In general, with k bits you can represent 2^k values. Their range will depend on the system you are using:
Unsigned: 0 to 2^k - 1
Signed: -2^(k-1) to 2^(k-1) - 1
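Plugging k = 9 into those formulas, a quick Python check:

    k = 9
    print(2**k)                         # 512 distinct values in total
    print(0, 2**k - 1)                  # unsigned range: 0 to 511
    print(-2**(k - 1), 2**(k - 1) - 1)  # signed range: -256 to 255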
What you're missing: Zero is a value
A better way to solve it is to start small.
Let's start with 1 bit. Which can either be 1 or 0. That's 2 values, or 10 in binary.
Now 2 bits, which can either be 00, 01, 10 or 11. That's 4 values, or 100 in binary... See the pattern?
Okay, since it already "leaked": You're missing zero, so the correct answer is 512 (511 is the greatest one, but it's 0 to 511, not 1 to 511).
By the way, a good follow-up exercise would be to generalize this:
How many different values can be represented in n binary digits (bits)?
Without wanting to give you the answer here is the logic.
You have 2 possible values in each digit, and you have 9 of them.
It's like base 10, where each digit has 10 different values: say you have 2 of them, which covers 0 to 99, and 0 to 99 makes 100 numbers. If you do the calculation, you get an exponential function
base^numberOfDigits:
10^2 = 100
2^9 = 512
There's an easier way to think about this. Start with 1 bit. This can obviously represent 2 values (0 or 1). What happens when we add a bit? We can now represent twice as many values: the values we could represent before with a 0 appended and the values we could represent before with a 1 appended.
So the number of values we can represent with n bits is just 2^n (2 to the power n).
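That doubling argument translates directly into a recurrence; a minimal Python sketch:

    def count_values(n):
        """How many values n bits can represent."""
        if n == 1:
            return 2                    # a single bit is 0 or 1
        return 2 * count_values(n - 1)  # one more bit doubles the count

    print(count_values(9))   # 512
    print(count_values(14))  # 16384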
The thing you are missing is which encoding scheme is being used. There are different ways to encode binary numbers. Look into signed number representations. For 9 bits, the ranges and the amount of numbers that can be represented will differ depending on the system used.
It is most important that it be accurate, but it should also take the least disk space possible.
You would need a DECIMAL(6,5) to store a number from 0 to 1 with 5 decimal places.
The declaration syntax for a DECIMAL column is DECIMAL(M,D). The ranges of values for the arguments in MySQL 5.1 are as follows:
M is the maximum number of digits (the precision). It has a range of 1 to 65.
D is the number of digits to the right of the decimal point (the scale). It has a range of 0 to 30 and must be no larger than M.
According to this, in MySQL 5.0.3:
DECIMAL(5,5) or DECIMAL(6,6) should take 3 bytes.
DECIMAL(4,4) should take 2 bytes.
If you need to store values from 0 to 1 inclusive, you might be tempted to use DECIMAL(6,5). But that occupies 4 bytes, as the integer and fractional parts are stored separately and you need one byte for the integer digit and three for the 5 decimal digits. And if you have 4 bytes, you might as well use FLOAT.
Before MySQL 5, DECIMALs were stored as strings, and the most efficient way was to store a SMALLINT or MEDIUMINT (2 or 3 bytes) and manually divide it by 10000 or 1000000, respectively.
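The manual-scaling trick would look something like this; a hedged Python sketch of the idea for the 5-decimal-place case (the scale factor 100000 is chosen to fit the question's 0-to-1 range in a 3-byte MEDIUMINT; the function names are made up):

    SCALE = 100000  # 5 decimal places

    def encode(x):
        """Map a value in [0, 1] to an integer for a MEDIUMINT column."""
        return round(x * SCALE)

    def decode(i):
        """Map the stored integer back to the original value."""
        return i / SCALE

    stored = encode(0.12345)
    print(stored)          # 12345
    print(decode(stored))  # 0.12345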