INT type in SQL max value calculation - mysql

I am currently learning SQL.
While looking at INT, I came to the understanding that an INT type is 4 bytes long; at 8 bits per byte, that makes each INT 32 bits.
However, for INT it is said that the maximum value for unsigned types is (2^32) - 1, where the -1 accounts for the value 0. I understand that the 32 comes from the fact that each INT is 32 bits.
My question is where does the 2 come from in the calculation?
My intuition is telling me that each bit must contribute some sort of factor of 2.

INT is actually a signed type by default in SQL. The range is from -2^31 through 2^31 - 1, which is -2,147,483,648 to 2,147,483,647. There are exactly 2^32 possible values in this range. Note that it includes 0.
An unsigned INT would range from 0 to 2^32 - 1, that is, up to 4,294,967,295. The -1s are there because 0 is included in the range, so the counting starts at 0 rather than 1.
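If it helps to verify those numbers, here is a quick Python sketch (Python is only used as a calculator here; the column type itself is MySQL's):
BITS = 32  # MySQL INT is 4 bytes, 8 bits each = 32 bits
unsigned_max = 2 ** BITS - 1         # 4,294,967,295
signed_min = -(2 ** (BITS - 1))      # -2,147,483,648
signed_max = 2 ** (BITS - 1) - 1     # 2,147,483,647
print(f"INT UNSIGNED: 0 .. {unsigned_max:,}")
print(f"INT (signed): {signed_min:,} .. {signed_max:,}")
print(f"distinct values either way: {2 ** BITS:,}")  # 4,294,967,296 = 2^32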
The range of possible values is easiest to see with fewer bits. For instance, 3 bits can represent the signed values -4 to 3 (or, unsigned, 0 to 7):
Bits  Unsigned  Signed
000       0        0
001       1        1
010       2        2
011       3        3
100       4       -4
101       5       -3
110       6       -2
111       7       -1
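The same table can be generated with a few lines of Python, treating any pattern whose top bit is set as a negative two's-complement value (just an illustration, not part of the original answer):
BITS = 3
print("Bits  Unsigned  Signed")
for pattern in range(2 ** BITS):  # 000 .. 111
    unsigned = pattern
    # Two's complement: patterns with the top bit set wrap around to negatives.
    signed = pattern - 2 ** BITS if pattern >= 2 ** (BITS - 1) else pattern
    print(f"{pattern:0{BITS}b}   {unsigned:>6}   {signed:>5}")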

Computers use the binary system for storing values. Let's try an analogy between the binary and decimal systems:
Consider the 10-based (decimal) system. If you have a number with 32 decimal places, with each place holding a value 0-9, you have 10^32 possible values (obviously enough; we use this system on a daily basis).
Now consider the 2-based (binary) system, which is the one computers use (for practical reasons: two states are easiest to distinguish and the wiring logic is simplest). Each place (bit) holds a value 0-1, so 32 bits give 2^32 possible values.
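The analogy is easy to check with a few lines of Python: the count of distinct values is always base ** places.
for base in (10, 2):
    for places in (1, 2, 3):
        print(f"base {base}, {places} place(s): {base ** places} values")
print(10 ** 32)  # 32 decimal places
print(2 ** 32)   # 32 bits: 4,294,967,296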

Binary and Bits

Here's a question I've come across:
Assume each X represents one bit, either 0 or 1. Consider the 8-bit unsigned binary numbers A = 1XXX XXXX and B = 0XXX XXXX. Which of the following are true (you may tick more than one answer):
A: B > A
B: A > 127
C: Can't tell which of A or B is larger
D: B < 127
E: A > B
Explanations needed (I have zero understanding of this). Thanks!
The key to the answer is in the word unsigned. It means the MSB (leftmost bit) is not being used to indicate the result's sign. Processors perform mathematical operations such as addition, subtraction and comparison using two's complement, so to know the numeric value of a binary word we must know whether it is signed (can contain negative values) or unsigned (non-negative values only).
So in the case above the values are unsigned, which means A is always greater than B, and since A has the MSB of an 8-bit value set to 1, it must be at least 128.
In the same way that decimal counts in powers of ten, binary counts in powers of two:
Binary place values: 128 64 32 16 8 4 2 1
Decimal place values: 1000 100 10 1
However, if the binary value were signed, the leftmost bit would be used to express positive (0) or negative (1), and when the value is negative we invert the bits and add one to get back to the magnitude of the (negative) result.
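To make the unsigned-versus-signed distinction concrete, here is a small Python sketch; the concrete bit patterns are arbitrary examples of mine (any 1XXX XXXX and 0XXX XXXX would do):
def as_unsigned(bits: str) -> int:
    return int(bits, 2)

def as_signed(bits: str) -> int:
    # Two's complement: if the top bit is set, subtract 2**len(bits).
    value = int(bits, 2)
    return value - 2 ** len(bits) if bits[0] == "1" else value

A = "10110101"  # top bit 1: read as unsigned, this is at least 128
B = "01110101"  # top bit 0: read as unsigned, this is at most 127

print(as_unsigned(A), as_unsigned(B))  # 181 117 -> A > B and A > 127
print(as_signed(A), as_signed(B))      # -75 117 -> a signed reading would flip the comparison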

How many Bits were used?

Assume a simple machine uses 4 bits to represent its instruction set. How many different instructions can this machine have? How many instructions could it have if eight bits are used? How many if 16 bits are used?
Sorry for the homework-style theory question; I didn't know how else to put it. Thanks.
A bit can have two values: 0 or 1.
How many unique values are there of no bits? Just one. I'd show it here, but I don't know how to show no bits.
How many unique values are there of one bit? Two: 0 1
How many unique values are there of two bits? Four: 00 01 10 11
How many unique values are there of three bits? Eight: 000 001 010 011 100 101 110 111
Notice anything? Each time you add another bit, you double the number of values. You can represent that with this recursive formula:
unique_values(0) -> 1
unique_values(Bits) -> 2 * unique_values(Bits - 1)
This happens to be a recursive definition of "two to the power of," which can also be represented in this non-recursive formula:
unique_values = 2 ^ bits # ^ is exponentiation
Now you can compute the number of unique values that can be held by any number of bits, without having to count them all out. How many unique values can four bits hold? Two to the fourth power, which is 2 * 2 * 2 * 2 which is 16.
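The recursive definition above translates directly into runnable Python, if it helps to play with it:
def unique_values(bits: int) -> int:
    # Base case: zero bits hold exactly one (empty) value.
    if bits == 0:
        return 1
    # Each extra bit doubles the number of values.
    return 2 * unique_values(bits - 1)

for n in (4, 8, 16):
    print(n, "bits:", unique_values(n), "==", 2 ** n)  # 16, 256, 65536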
It's 2 to the power "bits". So
4 bits = 16 instructions
8 bits = 256 instructions
16 bits = 65536 instructions
You can have 2 raised to the power of the number of bits (since each bit can be either 1 or 0). E.g. for the 4-bit machine: 2^4 = 16.

How many values can be represented with n bits?

For example, if n=9, then how many different values can be represented in 9 binary digits (bits)?
My thinking is that if I set each of those 9 bits to 1, I will make the highest number possible that those 9 digits are able to represent. Therefore, the highest value is 1 1111 1111, which equals 511 in decimal. I conclude, therefore, that 9 binary digits can represent 511 different values.
Is my thought process correct? If not, could someone kindly explain what I'm missing? How can I generalize it to n bits?
2^9 = 512 values, because that's how many combinations of zeroes and ones you can have.
What those values represent however will depend on the system you are using. If it's an unsigned integer, you will have:
000000000 = 0 (min)
000000001 = 1
...
111111110 = 510
111111111 = 511 (max)
In two's complement, which is commonly used to represent integers in binary, you'll have:
000000000 = 0
000000001 = 1
...
011111110 = 254
011111111 = 255 (max)
100000000 = -256 (min) <- yay integer overflow
100000001 = -255
...
111111110 = -2
111111111 = -1
In general, with k bits you can represent 2^k values. Their range will depend on the system you are using:
Unsigned: 0 to 2^k - 1
Signed: -2^(k-1) to 2^(k-1) - 1
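Those formulas are easy to check for k = 9 with a few lines of Python:
def ranges(k: int):
    # Number of values, then the unsigned and two's-complement ranges for k bits.
    count = 2 ** k
    unsigned = (0, 2 ** k - 1)
    signed = (-(2 ** (k - 1)), 2 ** (k - 1) - 1)
    return count, unsigned, signed

print(ranges(9))  # (512, (0, 511), (-256, 255))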
What you're missing: Zero is a value
A better way to solve it is to start small.
Let's start with 1 bit, which can be either 1 or 0. That's 2 values, or 10 in binary.
Now 2 bits, which can be 00, 01, 10 or 11. That's 4 values, or 100 in binary... See the pattern?
Okay, since it already "leaked": You're missing zero, so the correct answer is 512 (511 is the greatest one, but it's 0 to 511, not 1 to 511).
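If you would rather see the counting than take the formula on faith, here is a tiny Python sketch that enumerates the patterns the same "start small" way:
from itertools import product

for n in (1, 2, 3, 9):
    patterns = list(product("01", repeat=n))  # every bit string of length n
    print(n, "bits ->", len(patterns), "values")  # 2, 4, 8, 512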
By the way, a good follow-up exercise would be to generalize this:
How many different values can be represented in n binary digits (bits)?
Without wanting to give you the answer, here is the logic.
You have 2 possible values in each digit, and you have 9 of them.
Compare it to base 10, where each digit has 10 different values: with 2 digits you can write 0 to 99, which is 100 numbers. Doing the calculation, you get an exponential function:
base^numberOfDigits:
10^2 = 100
2^9 = 512
There's an easier way to think about this. Start with 1 bit. This can obviously represent 2 values (0 or 1). What happens when we add a bit? We can now represent twice as many values: the values we could represent before with a 0 appended and the values we could represent before with a 1 appended.
So the number of values we can represent with n bits is just 2^n (2 to the power n).
The thing you are missing is which encoding scheme is being used. There are different ways to encode binary numbers; look into signed number representations. For 9 bits, the ranges and the number of distinct values that can be represented will differ depending on the system used.

What column type should I use to store values between 0 and 1 (say up to 5 decimal places) in MySQL?

It is most important that the values be stored accurately, but it should also take the least disk space possible.
You would need a DECIMAL(6,5) to store a number from 0 to 1 with 5 decimal places.
The declaration syntax for a DECIMAL column is DECIMAL(M,D). The ranges of values for the arguments in MySQL 5.1 are as follows:
M is the maximum number of digits (the precision). It has a range of 1 to 65.
D is the number of digits to the right of the decimal point (the scale). It has a range of 0 to 30 and must be no larger than M.
According to this, as of MySQL 5.0.3:
DECIMAL(5,5) or DECIMAL(6,6) should take 3 bytes.
DECIMAL(4,4) should take 2 bytes.
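If you want to check other precisions, here is a rough Python sketch of the sizing rule described in the MySQL manual for 5.0.3 and later (integer and fractional digits are packed separately, nine digits per 4 bytes, leftover digits taking 1 to 4 bytes); treat it as an approximation, not the server's own code:
# Bytes needed for the leftover digits after packing groups of nine.
LEFTOVER_BYTES = {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3, 6: 3, 7: 4, 8: 4}

def digit_bytes(digits: int) -> int:
    full_groups, leftover = divmod(digits, 9)
    return full_groups * 4 + LEFTOVER_BYTES[leftover]

def decimal_bytes(m: int, d: int) -> int:
    # DECIMAL(M,D): M-D integer digits and D fractional digits, stored separately.
    return digit_bytes(m - d) + digit_bytes(d)

print(decimal_bytes(6, 5))  # 4 bytes
print(decimal_bytes(5, 5))  # 3 bytes
print(decimal_bytes(4, 4))  # 2 bytes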
If you need to store values from 0 to 1 inclusive, you might be tempted to use DECIMAL(6,5). But that occupies 4 bytes, because the integer and fractional parts are stored separately: one byte for the integer digit and three bytes for the 5 decimal digits. And if you are going to spend 4 bytes, you might as well use FLOAT.
Before MySQL 5, DECIMALs were stored as strings, and the most efficient approach was to store a SMALLINT or MEDIUMINT (2 or 3 bytes) and manually divide it by 10000 or 1000000 respectively.
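A sketch of that manual-scaling trick in Python (I use a scale of 10^5 here to get exactly 5 decimal places for values from 0 to 1; the answer above mentions dividing by 10^4 or 10^6 depending on whether you choose SMALLINT or MEDIUMINT):
SCALE = 10 ** 5  # 5 decimal places

def encode(value: float) -> int:
    # What you would store in the integer column.
    scaled = round(value * SCALE)
    if not 0 <= scaled <= SCALE:  # only values from 0 to 1 inclusive
        raise ValueError("value out of range")
    return scaled

def decode(stored: int) -> float:
    # What you would compute when reading it back.
    return stored / SCALE

print(encode(0.12345))  # 12345
print(decode(12345))    # 0.12345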