In this code golf question, there is a Python answer that encodes the lengths of the English names of all the integers from 1 to 99 into one big number:
7886778663788677866389978897746775667552677566755267756675527886778663788677866355644553301220112001
To get the length of n, you just have to calculate 3 + (the_big_number / (10**n)) % 10. How does this work?
(the_big_number // 10**n) % 10 pulls out the nth least significant digit of the big number (n counted from zero, using integer division), so the lengths are simply stored as digits: the length of "zero" (stored digit 1, plus 3, gives 4) sits at the far right, and they run up to the length of "ninety-nine" (7 + 3 = 10) at the far left.
The shortest English number names have three letters ("one", "two", "six", "ten"), so each length is stored with an offset of three. The longest names below 100 have 12 letters (e.g. "seventy-eight"), which is stored as 12 - 3 = 9, so every length fits in a single digit.
Starting from the right:
the first digit is how many letters are in "zero" minus 3
the second digit is how many letters are in "one", minus 3
the third digit...
...the 100th digit is how many letters are in "ninety-nine", minus 3.
Note that the longest names, such as "seventy-seven", have only 12 letters, which conveniently fits in a single digit after subtracting 3.
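As a sanity check, here is a small Python sketch (not the golfed answer itself; BIG and name_length are just illustrative names) that rebuilds the letter counts and verifies every digit of the constant against the 3 + (N // 10**n) % 10 rule:

    BIG = 7886778663788677866389978897746775667552677566755267756675527886778663788677866355644553301220112001

    ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven", "eight",
            "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
            "sixteen", "seventeen", "eighteen", "nineteen"]
    TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
            "eighty", "ninety"]

    def name_length(n):
        # Letter count of n (0-99) written out in English, ignoring spaces/hyphens.
        if n < 20:
            return len(ONES[n])
        return len(TENS[n // 10]) + (len(ONES[n % 10]) if n % 10 else 0)

    for n in range(100):
        assert 3 + BIG // 10**n % 10 == name_length(n)
    print("all 100 digits check out")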
I have run into this DECIMAL problem in MySQL many times!
When I declare a column as DECIMAL(10,8), the maximum value allowed is 99.99999999!
Shouldn't it be 9999999999.99999999?
I want the largest possible DECIMAL value with 8 digits after the decimal point (.).
From the documentation:
The declaration syntax for a DECIMAL column is DECIMAL(M,D). The ranges of values for the arguments in MySQL 5.7 are as follows:
M is the maximum number of digits (the precision). It has a range of 1 to 65.
D is the number of digits to the right of the decimal point (the scale). It has a range of 0 to 30 and must be no larger than M.
The first value is not the number of digits to the left of the decimal point, but the total number of digits.
That's why the value 9999999999.99999999 with DECIMAL(10, 8) is not possible: it is 18 digits long.
A decimal is defined by two parameters - DECIMAL(M, D), where M is the total number of digits, and D is number of digits after the decimal point out of M. To properly represent the number 9999999999.99999999, you'd need to use DECIMAL(18, 8).
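As a rough illustration in plain Python (not MySQL itself; decimal_max is just a hypothetical helper), the scale D is carved out of the precision M, so only M - D digits remain for the integer part:

    from decimal import Decimal

    def decimal_max(m, d):
        # Largest value a DECIMAL(m, d) column can hold: m - d integer digits,
        # then d fractional digits, all nines.
        return Decimal("9" * (m - d) + "." + "9" * d)

    print(decimal_max(10, 8))  # 99.99999999
    print(decimal_max(18, 8))  # 9999999999.99999999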
The way DECIMAL(x,y) specifiers work is that x is the total number of digits and y is how many of them come after the decimal point.
10,8 means NN.NNNNNNNN.
If you want more, you need to make your range larger accordingly.
The first number is the total of digits, and the second one is the number of decimal places.
For the number you request, try DECIMAL(18,8).
You define DECIMAL(total digits, decimal digits), so remember: the decimal digits are counted as part of the total digits. If you want many digits on both sides of the point, use a larger total, or consider the FLOAT type.
Is it just a coincidence that hexadecimal 0xAAAAAAAA represents a binary value with the even positions set to 1?
Similarly, is it a coincidence that something as elegant as 0x55555555 represents a binary value with the odd positions set to 1?
The binary representation of 5 is 0101, so 0x55555555 has 16 ones and 16 zeros, with the ones and zeros in alternating positions. Similarly, 0x33333333 has 16 ones and 16 zeros, alternating in runs of two (two consecutive ones, then two consecutive zeros).
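A quick Python check of those patterns (each hex digit expands to exactly four bits; the 032b format just pads to 32 bits):

    print(f"{0xAAAAAAAA:032b}")  # 10101010101010101010101010101010
    print(f"{0x55555555:032b}")  # 01010101010101010101010101010101
    print(f"{0x33333333:032b}")  # 00110011001100110011001100110011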
Nothing special about those numbers per se, other than the fact that their corresponding bit patterns are useful.
I think the key realization here is that it's super easy to come up with a compact hex number to represent any longer bit pattern (even easier if it's repeating), right off the top of your head.
Why? Because it's trivial to convert from hex-to-binary or binary-to-hex - every four bits of the pattern can be neatly represented by one hex digit:
So let's say I wanted this 16-bit mask: 1110111011101110. This is 1110 repeated 4 times, so it's just some hex digit, 4 times. Since 1110 is 14 in decimal, that's gonna be "E", so our mask would be: 0xEEEE.
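In Python, for example, the nibble-repetition idea looks like this (a rough sketch; nibble and mask are just illustrative names):

    nibble = 0b1110                           # one 4-bit group, i.e. hex E
    mask = sum(nibble << (4 * i) for i in range(4))
    print(hex(mask), bin(mask))               # 0xeeee 0b1110111011101110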
How many bits are required to store three different values?
My guess is 8 times 3 = 24 bits.
I'm confused because I learned in class that an 8-bit byte can only hold a value between 0 and 255. Does this mean that to store a value above 255 we need more than 8 bits?
A bit is either 0 or 1. So it can store 2 values.
Two bits can store 2*2 values, 4.
Three bits can store 2*2*2 values, 8.
And so on.
So to store 3 values, you need at least two bits.
And yes, you need more than 8 bits to store a value above 255, because 8 bits can only distinguish 2^8 = 256 values (0 through 255).
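Here is a small Python sketch of that counting rule (bits_needed is just an illustrative helper):

    import math

    def bits_needed(n_values):
        # k bits can distinguish 2**k values, so n distinct values need
        # ceil(log2(n)) bits (at least 1).
        return max(1, math.ceil(math.log2(n_values)))

    print(bits_needed(3))    # 2 -> two bits are enough for three values
    print(bits_needed(256))  # 8 -> exactly one byte
    print(bits_needed(257))  # 9 -> more than 8 bits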
TL;DR: two digits.
In any number base system, the number of values that can be held by a single digit is equal to the base number, so for the regular base-10 number system, that would be 10 (0 through 9). To hold higher numbers, you give yourself more digits: this would be the ones place, tens place, hundreds place, etc you learned in school.
Once you start giving yourself more digits, then it's just a matter of combinations: in a two-digit base-10 number, how many combinations are there? 00 through 99, so 100; with three digits, 000 through 999, so 1000.
The name bit is nothing more than the special name given to the digit in base-2 (otherwise known as binary). It can hold exactly two values, 0 and 1. To get more, you'd have to give yourself another digit, e.g., the ones place, the twos place, and even more digits: the fours place, the eights place, etc. Again, it's nothing but combinatory math. Two base-2 digits (or binary digits, or bits) can hold values 00 through 11, so 4; with three bits, 000 through 111, so 8; with four bits, 0000 through 1111, so 16; with eight (a byte), 00000000 through 11111111, so 256.
So, in order to hold three values - e.g., 0, 1 and 2 in base 10 - you'd need two digits: 00, 01 and 10 in binary.
If you have a binary number, say 1010 (which is 10 in base 10), is it true that dividing by two removes the first digit (so it ends up as 010)?
Basically, how do you remove the first digit (i.e. if the binary number is 0 or 1, it ends up as nothing)? I don't want code or anything; I just want to know whether it's something like dividing or multiplying by two.
Also, ignore any leading zeroes of the binary number.
It works the same way as it does in base ten. The number 401, without its first digit, is 1. You've subtracted 400, no? Now, to divide by ten, you would SHIFT the digits to the right. 401 shifted right is 040. 401/10 = 40. Note that the 1 is discarded because we're working with integer division.
So in binary, it's exactly the same, but with powers of 2. Removing the first bit does not DIVIDE by two. It SUBTRACTS the value of its position. So 101b (which is 4+1 = 5), without its largest bit, is 001b, or 1 decimal. It's subtraction: 5 - 4 = 1.
To divide by two, you shift the bits to the right, just like in base 10. So 101b would become 010b, which is 2 decimal. 5/2 == 2 (we're dropping the fractional part since it's integer division)
Make sense? If you're ever confused about binary, just think of how the digits & positions work in base ten, and instead of powers of ten, use powers of two.
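In Python terms, the distinction looks like this (just a sketch):

    n = 0b101         # 5
    print(n >> 1)     # 2 -> shifting right divides by two (5 // 2)
    print(n - 0b100)  # 1 -> dropping the leading bit subtracts its value (5 - 4)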
If by "first digit" you mean "first significant digit", then what you're looking for is something like number and not (1 shl (int(log number / log 2))), where and and not are the bitwise operations, shl means shift left, and int is rounding down (never up) to integer. log is just a logarithm, in any base (same base for both cases).
If by "first digit" you mean the digit in some nth position (let the rightmost position be 0, counting to the left), then you just do number and not (1 shl position).
Removing a digit is like changing it to 0. Changing 1010 to 0010 is accomplished by subtracting 1000: 1010 - 1000 = 0010.
Right now I'm preparing for my AP Computer Science exam, and I need some help understanding how to convert between decimal, hexadecimal, and binary values by hand. The book that I'm using (Barron's) includes an example but does not explain it very well.
What are the formulas that one should use for conversion between these number types?
Are you happy that you understand number bases? If not, then you will need to read up on this or you'll just be blindly following some rules.
Plenty of books would spend a whole chapter or more on this...
Binary is base 2, Decimal is base 10, Hexadecimal is base 16.
So Binary uses digits 0 and 1, Decimal uses 0-9, Hexadecimal uses 0-9 and then we run out so we use A-F as well.
So the position of a decimal digit indicates units, tens, hundreds, thousands... these are the "powers of 10"
The position of a binary digit indicates units, 2s, 4s, 8s, 16s, 32s...the powers of 2
The position of hex digits indicates units, 16s, 256s...the powers of 16
For binary to decimal, add up each 1 multiplied by its 'power', so working from right to left:
1001 binary = 1*1 + 0*2 + 0*4 + 1*8 = 9 decimal
For binary to hex, you can either work it out the total number in decimal and then convert to hex, or you can convert each 4-bit sequence into a single hex digit:
1101 binary = 13 decimal = D hex
1111 0001 binary = F1 hex
For hex to binary, reverse the previous example - it's not too bad to do in your head because you just need to work out which of 8,4,2,1 you need to add up to get the desired value.
For decimal to binary, it's more of a long-division-style problem - find the biggest power of 2 that is not larger than your input, set the corresponding binary bit to 1, and subtract that power of 2 from the original decimal number. Repeat until you have zero left.
E.g. for 87:
the powers of two are 1, 2, 4, 8, 16, 32, 64, ...; the highest one that fits into 87 is 64
64 is 2^6 so we set the relevant bit to 1 in our result: 1000000
87 - 64 = 23
the next highest power of 2 smaller than 23 is 16, so set the bit: 1010000
repeat for 4,2,1
final result 1010111 binary
i.e. 64+16+4+2+1 = 87 in decimal
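That procedure translates almost line for line into Python (a rough sketch; to_binary is just an illustrative name):

    def to_binary(n):
        power = 1
        while power * 2 <= n:      # find the largest power of 2 that fits
            power *= 2
        bits = ""
        while power >= 1:
            if n >= power:         # set the bit and subtract, as above
                bits += "1"
                n -= power
            else:
                bits += "0"
            power //= 2
        return bits

    print(to_binary(87))  # 1010111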
For hex to decimal, it's like binary to decimal, only you multiply by 1,16,256... instead of 1,2,4,8...
For decimal to hex, it's like decimal to binary, only you are looking for powers of 16, not 2. This is the hardest one to do manually.
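If you want to check your hand conversions, Python's built-ins do the same work:

    n = 87
    print(bin(n))             # 0b1010111 -> decimal to binary
    print(hex(n))             # 0x57      -> decimal to hexadecimal
    print(int("1010111", 2))  # 87        -> binary string back to decimal
    print(int("F1", 16))      # 241       -> hex string to decimal

    # The "multiply each digit by its power" rule, for 1001 binary:
    print(sum(int(d) * 2**i for i, d in enumerate(reversed("1001"))))  # 9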
This is a very fundamental question whose detailed answer, even at an entry level, could easily run to a couple of pages. Try googling it :-)