What's significant about the equation 2^n - 1 (or where do you see it)?

I know this could be a vague question (or not!).
I've seen the expression 2^n - 1 (or 2^n + 1) somewhere. Where do you see this equation, why is it significant, and when do you use it?

2^n - 1 is the largest unsigned integer that fits in n bits.
It's also a number whose primality is easy to test; see Mersenne prime: http://en.wikipedia.org/wiki/Mersenne_prime
It's also the combination on my suitcase.
What's the point of the question?
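For the Mersenne-prime angle, here is a minimal sketch of the Lucas-Lehmer test in C++, assuming an odd prime exponent p no larger than 31 so the squaring fits in 64 bits:
#include <cstdint>
#include <iostream>

// Lucas-Lehmer test: for an odd prime p, M_p = 2^p - 1 is prime
// iff s(p-2) == 0 (mod M_p), where s(0) = 4 and s(k) = s(k-1)^2 - 2.
bool isMersennePrime(unsigned p) {
    const std::uint64_t m = (1ull << p) - 1;    // the Mersenne number 2^p - 1
    std::uint64_t s = 4;
    for (unsigned i = 0; i < p - 2; ++i) {
        s = (s * s) % m;                        // fits in 64 bits for p <= 31
        s = (s >= 2) ? s - 2 : s + m - 2;       // subtract 2 modulo m
    }
    return s == 0;
}

int main() {
    const unsigned exponents[] = {3, 5, 7, 11, 13, 17, 19, 31};
    for (unsigned p : exponents)
        std::cout << "2^" << p << "-1 is "
                  << (isMersennePrime(p) ? "prime" : "composite") << "\n";
}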

John Smith answered the most common use of it: 2^n - 1 is the largest unsigned integer you can store in n bits.
8 bits: 255
16 bits: 65535
32 bits: 4294967295
Oh, and Mersenne primes, as Beemer pointed out (link in his answer).

It's also the maximum number of nodes in a binary tree with n levels.
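A quick C++ sketch that prints these values, assuming the usual convention that a tree with n levels holds at most 2^n - 1 nodes:
#include <cstdint>
#include <iostream>

int main() {
    const unsigned widths[] = {8, 16, 32};
    for (unsigned n : widths) {
        // (2^n) - 1: largest unsigned value in n bits, and also the
        // maximum node count of a binary tree with n levels.
        std::uint64_t value = (1ull << n) - 1;
        std::cout << n << " bits: " << value << "\n";
    }
}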

Related

Minimum number of bits required for a range of numbers

I'm trying to work out an answer to a question about measuring pressures.
The measurements are supposed to be stored in binary floating-point format, and my task is to determine the minimum number of bits required to do so given some constraints:
Maximum pressure is 1e+07 Pa
Minimum pressure is 10 Pa
Accuracy of measurements is 0.001%
So if I understand it correctly, I could happen to measure
1e+07 + 0.00001 * 1e+07 = 10000100 Pa
and would want to store it precisely. This would mean I need 24 bits, since
2^23 < 10000100 < 2^24 - 1.
Is this including the 1 bit for a negative sign? Since we don't measure negative pressures, would a more accurate answer be 23 bits?
And also for the minimum pressure, I could happen to measure 9.9999 Pa and would want to store this correctly, so four decimal places.
Since I could do the same type of calculation and end up with
2^13 < 9999 < 2^14 - 1
Is this already covered in the 23-24 bits I chose first?
I'm very new to this so any help or just clarification would be appreciated.
Unless you are asking this question out of (i) academic interest or (ii) because you are really short on storage - neither of which I am going to address - I would strongly advocate that you don't worry about the number of bits and instead use a standard float (4 bytes) or double (8 bytes). Databases and software packages understand floats and doubles. If you try to craft your own floating-point format using the minimum number of bits, you are adding a lot of work for yourself.
A float will hold 7 significant digits, whereas a double will hold 16. If you want to get technical a float (single precision) uses 8 bits for the exponent, 1 bit for the sign and the remaining 23 bits for the significand, whilst a double (double precision) uses 11 bits for the exponent, 1 bit for the sign and 52 bits for the significand.
So I suggest you focus instead on whether a float or a double best meets your needs.
I know this doesn't answer your question, but I hope it addresses what you need.
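To illustrate the suggestion, here is a small sketch (using the values from the question) that stores the extreme measurements in a float and checks the relative error against the 0.001% requirement:
#include <cmath>
#include <iomanip>
#include <iostream>

int main() {
    // Worst-case values from the question.
    const double maxPressure = 1.0e7 + 0.00001 * 1.0e7;   // 10000100 Pa
    const double minPressure = 9.9999;                     // Pa

    const float fMax = static_cast<float>(maxPressure);
    const float fMin = static_cast<float>(minPressure);

    std::cout << std::setprecision(12)
              << "10000100 Pa stored as float: " << fMax
              << " (relative error " << std::fabs(fMax - maxPressure) / maxPressure << ")\n"
              << "9.9999 Pa stored as float:   " << fMin
              << " (relative error " << std::fabs(fMin - minPressure) / minPressure << ")\n";
}
On a typical IEEE 754 system the relative error in both cases is far below 1e-5 (and 10000100 is exactly representable, since it is an integer below 2^24), which is why a plain float is already sufficient here.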

How do I know whether a binary value is a positive/negative (signed) number or an unsigned number?

I'm learning integer data formats in a computer science book, and as far as I understand, the binary representation of an integer, whether positive or negative, has the leftmost bit (MSB) be 0 for positive or 1 for negative. Let's say on an 8-bit computer: how would I know whether 10000010 means 130 in base 10 or whether it is referring to negative 2?
I might be wrong; if I am, please correct me.
If you were to just see the string 10000010 somewhere, I don't know... written on a wall or something, how would you know how to interpret it?
You might say, hey, that's ten million and ten (you thought it was base 10), or you might say, hey, that's -126 (you thought it was two's complement binary), or you might say that's positive 130 (you thought it was standard binary).
It is, in a theoretical sense, up to whatever is doing the interpreting how it is interpreted.
So, when a computer is holding 8 bits of data, it's up to it how it interprets it.
Now if you're programming, you can tell the computer how you want something interpreted. For example, in C++:
// char is 1 byte
unsigned char x = 130u;
Here I have told the compiler to put 130 unsigned into a byte, so the computer will store 10000010 and will interpret it as the value 130
Now consider
// char is 1 byte
char x = -126;
Here I have told the compiler to put -126 signed into a byte, so the computer will again store 10000010 but this time it will interpret it as the value -126.
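Putting the two snippets together, a minimal sketch that prints both interpretations of the same byte (assuming a typical two's-complement machine):
#include <iostream>

int main() {
    unsigned char u = 0x82;                          // the bit pattern 10000010
    signed char   s = static_cast<signed char>(u);   // same bits, signed interpretation

    std::cout << "unsigned: " << static_cast<int>(u) << "\n";   // 130
    std::cout << "signed:   " << static_cast<int>(s) << "\n";   // -126 (two's complement)
}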
Take a look at the answer posted to this question: Negative numbers are stored as 2's complement in memory, how does the CPU know if it's negative or positive?
The CPU uses something called an opcode to determine which operation to perform on a memory location (in this case, the value 10000010). It is that operation within the CPU that manipulates the value as either a negative or a positive number. The CPU doesn't know whether the number is signed or unsigned - it uses the opcode of the instruction operating on that number to determine whether it should be a signed or unsigned operation.

What would indicate an overflow?

I'm doing this question and some clarification would be super helpful. What exactly would an overflow entail? Is it when an extra bit would be needed when converting to decimal notation? For part 3, "consider the bits as two's complement numbers", does he mean find the 2's complement? Thanks a bunch.
For number 3 he does not mean find the 2's complement. He is telling you to treat the values as signed numbers using 2's complement notation. That would mean the first value in a) is positive and the other three are negative.
For overflow it is different for 2 and 3. For 2, unsigned numbers, overflow occurs if there is a carry out of the high bit. For 3, 2's complement signed numbers, overflow occurs if the sign of the result is not correct. For example, if you add two positive numbers and the result is negative, there was overflow.
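A small sketch of both rules, using two made-up 8-bit operands (not the values from the question):
#include <cstdint>
#include <iostream>

int main() {
    const std::uint8_t a = 0xCA, b = 0x75;     // 202 + 117 unsigned, or -54 + 117 signed

    // Rule for unsigned numbers: overflow = carry out of the high bit.
    const unsigned wide = static_cast<unsigned>(a) + static_cast<unsigned>(b);
    const bool unsignedOverflow = wide > 0xFF;

    // Rule for two's-complement numbers: overflow = both operands have the
    // same sign but the 8-bit result has the opposite sign.
    const std::uint8_t result = static_cast<std::uint8_t>(wide);
    const bool signedOverflow = (((a ^ b) & 0x80) == 0) && (((a ^ result) & 0x80) != 0);

    std::cout << "unsigned overflow: " << unsignedOverflow << "\n"    // 1 (202 + 117 > 255)
              << "signed overflow:   " << signedOverflow << "\n";     // 0 (-54 + 117 = 63 fits)
}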
If you add x and y and get a result that is less than x or less than y, then the addition has overflowed (wrapped-around).
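In code (a sketch with made-up 32-bit values), that wrap-around check looks like this:
#include <cstdint>
#include <iostream>

int main() {
    const std::uint32_t x = 4000000000u, y = 500000000u;

    const std::uint32_t sum = x + y;     // unsigned arithmetic wraps modulo 2^32
    if (sum < x)                         // the comparison test described above
        std::cout << x << " + " << y << " wrapped around to " << sum << "\n";
}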
An overflow would be if the resulting sum is a larger number than can be expressed in an 8-bit system. I believe that would be any number greater than 255, i.e. 2^8 - 1.
Your assumption "an extra bit" is mostly correct. In an 8 bit system, all numbers are stored in 8 bits. Any operation that results in a number greater than the maximum that can be represented will be an overflow. This doesn't happen when you convert to decimal, but when you actually perform the sum with the binary values. If all numbers are 8 bits, you can't just add an additional bit when you need to store a larger number.
Yes, "two's complement" is the same as "2's complement". I'm not aware of any distinction between whether you spell it out or use the numeral.

Negative integers in binary

5 (decimal) in binary 00000101
-5 (two's complement) in binary 11111011
but 11111011 is also 251 (decimal)!
How does the computer discern one from the other?
How does it know whether it's -5 or 251?
It's THE SAME 11111011!
Thanks in advance!!
Signed bytes have a maximum of 127.
Unsigned bytes cannot be negative.
The compiler knows whether the variable holding that value is of signed or unsigned type, and treats it appropriately.
If your program chooses to treat the byte as signed, the run-time system decides whether the byte is to be considered positive or negative according to the high-order bit. A 1 in that high-order bit (bit 7, counting from the low-order bit 0) means the number is negative; a 0 in that bit position means the number is positive. So, in the case of 11111011, bit 7 is set to 1 and the number is treated, accordingly, as negative.
Because the sign bit takes up one bit position, the absolute magnitude of the number can range from 0 to 127, as was said before.
If your program chooses to treat the byte as unsigned, on the other hand, what would have been the sign bit is included in the magnitude, which can then range from 0 to 255.
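A short sketch of that high-order-bit rule, using the byte from the question (and assuming a two's-complement machine for the signed reading):
#include <iostream>

int main() {
    const unsigned char pattern = 0xFB;   // 11111011

    std::cout << "bit 7 set:   " << ((pattern & 0x80) != 0) << "\n";                              // 1 -> negative if signed
    std::cout << "as unsigned: " << static_cast<int>(pattern) << "\n";                            // 251
    std::cout << "as signed:   " << static_cast<int>(static_cast<signed char>(pattern)) << "\n";  // -5
}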
Two's complement is designed to allow signed numbers to be added to and subtracted from one another in the same way unsigned numbers are. So there are only two cases where the signedness of numbers affects the computer at a low level:
when there are overflows
when you are performing operations on mixed operands: one signed, one unsigned
Different processors take different tacks for this. With respect to overflows, the MIPS RISC architecture, for example, deals with overflows using traps. See http://en.wikipedia.org/wiki/MIPS_architecture#MIPS_I_instruction_formats
To the best of my knowledge, mixing signed and unsigned needs to be avoided at the program level.
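To see the first point concretely, here is a small sketch showing that the same byte-level addition serves both interpretations (values chosen for illustration):
#include <cstdint>
#include <iostream>

int main() {
    // The same two bit patterns, added once as unsigned and once as signed bytes.
    const std::uint8_t ua = 0xFB, ub = 0x07;   // 251 + 7  (unsigned view)
    const std::int8_t  sa = -5,   sb = 7;      // -5 + 7   (signed view)

    const std::uint8_t usum = static_cast<std::uint8_t>(ua + ub);
    const std::int8_t  ssum = static_cast<std::int8_t>(sa + sb);

    // Both results are the byte 00000010: 258 wraps to 2, and -5 + 7 is 2.
    std::cout << "unsigned result: " << static_cast<int>(usum) << "\n";   // 2
    std::cout << "signed result:   " << static_cast<int>(ssum) << "\n";   // 2
}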
If you're asking "how does the program know how to interpret the value" - in general it's because you've told the compiler the "type" of the variable you assigned the value to. The program doesn't actually care that 00000101 is "5 decimal"; it just has an unsigned integer with value 00000101 that it can perform operations legal for unsigned integers upon, and it will behave in a given manner if you try to compare with or cast to a different "type" of variable.
At the end of the day everything in programming comes down to binary - all data (strings, numbers, images, sounds etc etc) and the compiled code just ends up as a large binary blob.

Are "65k" and "65KB" the same?

Are "65k" and "65KB" the same?
From xkcd: [comic omitted]
65KB normally means 66560 bytes. 65k means 65000, and says nothing about what it is 65000 of. If someone says 65k bytes, they might mean 65KB... but they're misspeaking if so. Some people argue for using KiB for the 1024-based unit (so 65 KiB = 66560 bytes), since k means 1000 in the metric system. Everyone ignores them, though.
Note: a lowercase b would mean bit, rather than bytes. 8Kb = 1KB. When talking about transmission rates, bits are usually used.
Edit: As Joel mentions, hard drive manufacturers often treat the K as meaning 1000. So hard disk space of 65KB would often mean 65000. Thumb drives and the like tend to use K as meaning 1024, though.
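For the record, the two byte counts being discussed come out like this (a trivial sketch):
#include <iostream>

int main() {
    std::cout << "65k  as metric thousands: " << 65 * 1000 << "\n";   // 65000
    std::cout << "65KB as 65 * 1024 bytes:  " << 65 * 1024 << "\n";   // 66560
}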
Probably.
Technically 65k just means 65 thousand (monkeys perhaps?). You would have to take into account the context.
65kB can be interpreted to mean either 65 * 1000 = 65,000 bytes or 65 * 2^10 = 66,560 bytes.
You can read about all this and kibibytes at Wikipedia.
65k is 65,000 of something
65KB is 66,560 bytes (65*1024)
Like most have said, 65KB is 66560, 65k is 65000. 65KB means 66560 BYTES, and 65k is ambiguous. So they're not the same.
Additionally, since there are a few people equating "8 bits = 1 byte", I thought I'd add a little bit about that.
Transmission rates are usually in bits per second, because the grouping into bytes might not be directly related to the actual transmission clock rate.
Take for instance 9600 baud with RS232 serial ports. There are always exactly 9600 bits going out per second (+/- maybe a 5% clock tolerance). However, if those bits are grouped as N-8-1, meaning "no parity, 8 bits, 1 stop bit", then there are 10 bits per byte and so the byte rate is 960 bytes/second maximum. However, if you have something odd like E-8-2, or "even parity, 8 bits, 2 stop bits" then it's 12 bits per byte, or 800 bytes/second. The actual bits are going out at exactly the same rate, so it only makes sense to talk about the bits/second rate.
So 1 byte might be 8 bits, 9 bits (ie parity), 10 bits (ie N81,E71,N72), 11 bits(ie E81), 12 bits (ie E82), or whatever. There are lots of combinations of ways with just RS232-style transmission to get very odd byte rates. If you throw in RS or ECC correction, you could have even more bits per byte. Then there's 8b/10b, 6b/8b, hamming codes, etc...
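A quick sketch of that byte-rate arithmetic for the two framings mentioned above:
#include <iostream>

int main() {
    const int baud = 9600;   // bits per second on the wire

    // bits per transmitted byte = start bit + data bits + parity bits + stop bits
    struct Framing { const char* name; int bitsPerByte; };
    const Framing framings[] = {
        {"N-8-1", 1 + 8 + 0 + 1},   // 10 bits per byte
        {"E-8-2", 1 + 8 + 1 + 2},   // 12 bits per byte
    };

    for (const Framing& f : framings)
        std::cout << f.name << ": " << baud / f.bitsPerByte << " bytes/second\n";
}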
In terms of data transfer rates - 65k implies 65 kilobits and 65KB implies 65 KiloBytes
Check this http://en.wikipedia.org/wiki/Data_rate_units
cheers
From Wikipedia for Kilobyte:
It is abbreviated in a number of ways: KB, kB, K and Kbyte.
In other words, they could both be abbreviations for Kilobyte. However, using only a lowercase 'k' is not a standard abbreviation, but most people will know what you mean.
There you go:
kB = kiloByte
KB = KelvinByte
kb = kilobit
Kb = Kelvinbit
Use kB and kb! But be aware that some people use 1024 instead of 1000 for k (kilo).
My opinion on this: kilo = 1000. So the first one who decided to use 1024 made a mistake. If I am not mistaken, 1024 was used first by IT engineers. Later someone (probably a marketing genius) figured out that they could label things using 1000 as kilo and make things look bigger than they actually are. Since then, you can't be sure which value is used for kilo.
In general, yes, they're both 65 kilobytes (66,560 bytes).
Sometimes the abbreviations are tricky with casing. If it had been "65Kb", it would have correctly meant kilobits.
A kilobyte (KB) is 1024 bytes.
Kilo stands for 1000.
So, going purely by notation: (65k = 65,000) != (65KB = 66,560).
However, if you're talking about memory you're probably always going to see KB (even if it's written as k).
Generally, KB = k. It's all very confusing really.
Strictly speaking, the former is not specifying the unit: 65,000 of what? So the two can't really be compared.
However, in general speech most people use 65K (note it's normally uppercase) to mean 65 kilobytes (or 65 * 1024 bytes).
Note that 65Kb usually denotes kilobits.
"Officially", 65k is 65,000; however people say 65k all the time, even if the real number is something like 65,123.
Typically 65k means anywhere from 64.00001 to 65.99999998 KiB, or sometimes anywhere between 63500 and 64999 bytes... i.e., we aren't all that precise most of the time with sizes of things. When someone cares to be precise, they will be explicit, or the meaning will be clear from context.
65 KiB means 65 * 1024 bytes... unless the person was rounding. Never trust a number unless you measure it yourself! :)
Hope that helps,
--- Dave
65k may be the same as 65KB, but remember, 65KB is larger than 65Kb.
Case is important, as are units.
Psto, you're right. This is an absolute minefield!
As many have said, K is technically kilo, meaning thousand (of anything), and comes from Greek.
But you can assume different units depending on the context.
As data transfer rates are most often measured in bits, K in this context could be assumed to mean kilobits.
When talking about data storage, a file's size, etc., K can be assumed to mean kilobytes.