Can I represent numbers from 0 to 255 in less than 8 bits? - binary

I know that it takes 8 bits to represent a number like 255 in binary. I'm desperately looking for a way of storing numbers from 0 to 255 (especially from 90 to 255) in less than 8 bits. Anything could be helpful: coordinate systems, spirals, compression, etc.
I need to store a number up to 255 in less than 8 bits (1 byte).

No, not all of them.
You can represent some of them in less than eight bits, and others in more than eight bits. Such a representation could result in an overall compression, if the frequency of occurrence of the byte values is heavily biased in the direction of the ones represented with fewer bits.
Your "especially from 90 to 255" sounds like a small bias. If you could assure that only those values are present, then they could be represented in a little less than 7.38 bits each, on average.
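As a concrete sketch of that bound (assuming, as the answer supposes, that only the values 90 to 255 ever occur): a sequence of such values can be packed as digits of a base-166 number, spending about log2(166) ≈ 7.375 bits per value on average. The function names here are illustrative, not from any library:

```python
import math

LOW, HIGH = 90, 255
RANGE = HIGH - LOW + 1  # 166 distinct values

# minimum average bits per value if all 166 values are equally likely
bits_per_value = math.log2(RANGE)  # roughly 7.375

def pack(values):
    # treat the sequence as the digits of one big base-166 number
    n = 0
    for v in values:
        n = n * RANGE + (v - LOW)
    return n

def unpack(n, count):
    # recover the digits in reverse, then restore the original order
    out = []
    for _ in range(count):
        n, digit = divmod(n, RANGE)
        out.append(digit + LOW)
    return list(reversed(out))
```

The saving only materializes over many values; a single value still needs a whole byte once you round up.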

Related

Why can some large numbers (around 100 digits) be factored very fast?

I am new to number theory, and I am trying to prime-factor large numbers of around 100 digits.
For example, my program factors a 93-digit number in 30 minutes, while a 116-digit number took the computer a few days.
However, there is a 104-digit number I am working on: 13270693758489295980223043261833153409168505210538146384653262578584663296471619841442958585315929292397
The result came out instantly.
I wonder why this number can be factored so fast. What conditions must a number satisfy to be factorable quickly and easily?
The 47-digit factor 15630142427492468388372081926250991439041076399 is smooth and easily found by Pollard's p − 1 method with B = 2000; the remaining cofactor is prime.
Some numbers have lots of small factors, so those get stripped out fast; the remaining cofactor is then a lot smaller and gets factored quickly, and so on.
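A minimal sketch of Pollard's p − 1 method mentioned above (not the asker's program; the function name and the trial-division primality test are illustrative simplifications): raise a base to the product of all prime powers below the bound, then take a gcd with n.

```python
from math import gcd

def pollard_p_minus_1(n, bound):
    # Pollard's p-1: if some prime factor p of n has p-1 "smooth"
    # (all prime-power factors <= bound), then a^M = 1 (mod p) for
    # M = product of prime powers <= bound, so gcd(a^M - 1, n)
    # reveals p without ever knowing p in advance.
    a = 2
    for q in range(2, bound + 1):
        if all(q % d for d in range(2, int(q ** 0.5) + 1)):  # q is prime
            e = 1
            while q ** (e + 1) <= bound:
                e += 1
            a = pow(a, q ** e, n)  # fold q^e into the exponent of a
    g = gcd(a - 1, n)
    return g if 1 < g < n else None  # None: no factor found at this bound
```

Here 241 · 2027 is a toy composite where 241 − 1 = 240 is smooth to the bound 20 but 2027 − 1 = 2 · 1013 is not, so only the smooth factor falls out.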

MySQL field type for weight and height data

What is the adequate MySQL field type for weight (in kilograms) and height (meters) data?
That all depends. There are advantages and disadvantages to the different numeric types:
If you absolutely need the height/weight to have exactly a certain number of decimal places, and not be subject to the issues that floating-point numbers can cause, use DECIMAL. DECIMAL takes two parameters: the total number of digits and the number of digits after the decimal point, so DECIMAL(5,2) can store numbers up to 999.99 with two decimal places. Fixed-precision math is a bit slower, and such values are more costly to store because of the algorithms involved. It's absolutely mandatory to use DECIMAL for currency values, or for any value you intend to do math with involving currency: if you're, for instance, paying people per pound or per meter, those columns would need to be DECIMAL.
If you're willing to accept that some values will be stored less precisely than others, use either FLOAT or DOUBLE, depending on the precision and the size of the numbers you're storing. Floating-point values are approximations: we write fractions in powers of 10, but computers do arithmetic in base 2, and the two are not directly interconvertible. Floating-point formats were developed for cases where 100% precision is not required but more speed is needed than can be achieved by modeling values with collections of integers.
For more info:
http://dev.mysql.com/doc/refman/5.0/en/numeric-types.html
http://dev.mysql.com/doc/refman/5.0/en/fixed-point-types.html
http://dev.mysql.com/doc/refman/5.0/en/floating-point-types.html
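A quick Python illustration of the floating-point imprecision the answer warns about, with Python's decimal module playing the role of MySQL's DECIMAL:

```python
from decimal import Decimal

# ten dimes in binary floating point: rounding error creeps in,
# because 0.1 has no exact base-2 representation
total_float = sum([0.1] * 10)
print(total_float == 1.0)  # False

# the same sum with exact decimal arithmetic
total_dec = sum([Decimal("0.1")] * 10)
print(total_dec == Decimal("1.0"))  # True
```

This is exactly why currency math should use fixed-point decimal types rather than FLOAT or DOUBLE.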

How does a computer work out if a value is greater than another?

I understand basic binary logic and how to do basic addition, subtraction, etc. I get that each of the characters in this text is just a binary number representing a character in a charset. The numbers don't really mean anything to the computer. I'm confused, however, as to how a computer works out that one number is greater than another. What does it do at the bit level?
If you have two numbers, you can compare each bit, from most significant to least significant, using a 1-bit comparator gate:
Of course n-bit comparator gates exist and are described further here.
It subtracts one from the other and sees if the result is less than 0 (by checking the highest-order bit, which is 1 on a number less than 0 since computers use 2's complement notation).
http://academic.evergreen.edu/projects/biophysics/technotes/program/2s_comp.htm
It subtracts the two numbers and checks whether the result is positive, negative (the highest bit, aka the "minus bit", is set), or zero.
Within the processor, there is often microcode for operations such as add and subtract that are already hardwired.
So, to compare two integers, the microcode can just do a subtraction and, based on the result, determine whether one is greater than the other.
Microcode is basically a layer of low-level programs invoked by machine instructions, making it look like there are more instructions than are actually hardwired into the processor.
You may find this useful:
http://www.osdata.com/topic/language/asm/intarith.htm
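The subtract-and-test idea above can be sketched in Python for unsigned 8-bit values. Masking to 9 bits keeps the borrow bit, which plays the role of a CPU's carry flag (this deliberately ignores the signed-overflow subtleties real hardware also handles):

```python
def unsigned_lt_8bit(a, b):
    # subtract in 9 bits: bit 8 of the masked result is the borrow,
    # which is set exactly when a < b for unsigned 8-bit inputs
    diff = (a - b) & 0x1FF
    return (diff >> 8) == 1
```

For example, unsigned_lt_8bit(5, 7) is True and unsigned_lt_8bit(7, 5) is False.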
I guess it does a bitwise comparison of two numbers from the most significant bit to the least significant bit, and when they differ, the number with the bit set to "1" is the greater.
In a big-endian architecture, comparing the following bytes:
A: 0010 1101
B: 0010 1010
would result in A being greater than B, since A's 6th bit (from the left) is set to one while the preceding bits are equal to B's.
But this is just a quick theoretical answer, with no concern for floating-point or negative numbers.
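The MSB-first comparison described above, sketched in Python for unsigned values (the function name is illustrative):

```python
def compare_msb_first(a, b, width=8):
    # walk from the most significant bit down; the first differing bit decides
    for i in reversed(range(width)):
        abit = (a >> i) & 1
        bbit = (b >> i) & 1
        if abit != bbit:
            return 1 if abit else -1  # 1 means a > b, -1 means a < b
    return 0  # all bits equal
```

Running it on the bytes from the example, compare_msb_first(0b00101101, 0b00101010) returns 1.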

Why is it useful to count the number of bits?

I've seen the numerous questions about counting the number of set bits in an integer, but why is it useful?
For those looking for algorithms about bit counting, look here:
Counting common bits in a sequence of unsigned longs
Fastest way to count number of bit transitions in an unsigned int
How to count the number of set bits in a 32-bit integer?
You can regard a string of bits as a set, with a 1 representing membership of the set for the corresponding element. The bit count therefore gives you the population count of the set.
Practical applications include compression, cryptography and error-correcting codes. See e.g. wikipedia.org/wiki/Hamming_weight and wikipedia.org/wiki/Hamming_distance.
If you're rolling your own parity scheme, you might want to count the number of bits. (In general, of course, I'd rather use somebody else's.) If you're emulating an old computer and want to keep track of how fast it would have run on the original, some had multiplication instructions whose speed varied with the number of 1 bits.
I can't think of any time I've wanted to do it over the past ten years or so, so I suspect this is more of a programming exercise than a practical need.
In an ironic sort of fashion, it's useful for an interview question because it requires some detailed low-level thinking and doesn't seem to be taught as a standard algorithm in comp sci courses.
Some people like to use bitmaps to indicate presence/absence of "stuff".
There's a simple hack to isolate the least-significant 1 bit in a word, convert it to a field of ones in the bits below it, and then you can find the bit number by counting the 1-bits.
countbits((x XOR (x-1)))-1;
Watch it work.
Let x = 00101100
Then x-1 = 00101011
x XOR x-1 = 00000111
Which has 3 bits set, so bit 2 was the least-significant 1-bit in the original word
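The hack above, as a runnable Python sketch (the function name is illustrative):

```python
def lowest_set_bit_index(x):
    # x XOR (x - 1) turns the lowest 1-bit and every bit below it into 1s;
    # counting those 1s and subtracting one gives that bit's index
    assert x != 0
    return bin(x ^ (x - 1)).count("1") - 1
```

For the worked example, lowest_set_bit_index(0b00101100) returns 2.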

MySQL Bit Data Type Storage Space

I am studying the MySQL certification guide. In the Bit Data Type section, it says
a BIT(4) stores 4 bits per value
and that the storage requirement for a BIT(n) column is (n+7)/8 bytes. I don't understand this part. Shouldn't a BIT(4) take up just 4 bits of storage?
Actually it's a clumsy way to round up the result. What it means is BIT(1) to BIT(8) take 1 byte, BIT(9) to BIT(16) take 2 bytes, etc... There is no 7 bits overhead. Divide the number of bits by 8 and round up the result. BIT(4) will take 1 byte.
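The (n+7)/8 formula is just integer ceiling division, e.g. in Python:

```python
def bit_column_bytes(n):
    # storage in bytes for a BIT(n) column: ceil(n / 8) via integer arithmetic
    return (n + 7) // 8
```

So BIT(4) and BIT(8) both take one byte, while BIT(9) takes two.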
It seems there is an overhead of 7 bits - probably identifying a block of memory as BIT storage.
These 7 bits are added to the number requested by BIT(n), and the total is divided by 8 to give the number of bytes. The manual defines (n+7)/8 as bytes.
So 4 bits requires less than 2 bytes. The manual says 'approximately' because it depends on whether you count whole bytes or fractions.