Binary representation of a natural number

To get a binary representation of a natural number like 20, we divide this number by 2, keep the remainder, and repeat with the quotient until it reaches zero. To get a binary representation of a decimal fraction like 0.4512, we multiply this number by 2 repeatedly.
What is the logical explanation of why these two procedures yield a binary representation?
Thanks

It is based on the fact that numbers are coded in binary.
If the number A is an integer, A can be rewritten as
A = Σ_(i=0..n−1) a_i × 2^i = a_(n−1) × 2^(n−1) + a_(n−2) × 2^(n−2) + ... + a_1 × 2 + a_0,
where each a_i is 0 or 1.
It is easy to see that if A is even, a_0 = 0, and if it is odd, a_0 = 1. So we already have the least significant bit a_0.
Now, if we divide A by two (integer division), a_0 disappears and we have
A/2 = a_(n−1) × 2^(n−2) + a_(n−2) × 2^(n−3) + ... + a_2 × 2 + a_1.
In this way we can determine a_1 from the parity of A/2, and by continuing we get all the bits of A.
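As a concrete illustration, here is a minimal C sketch of the division method (the function name print_binary is my own, not from the question):

    #include <stdio.h>

    /* Print the binary digits of n by repeated division by 2:
       each remainder is the next bit, least significant first. */
    void print_binary(unsigned n)
    {
        char bits[sizeof n * 8];
        int i = 0;
        do {
            bits[i++] = '0' + n % 2;   /* remainder = next bit */
            n /= 2;
        } while (n > 0);
        while (i > 0)
            putchar(bits[--i]);        /* emit most significant bit first */
        putchar('\n');
    }

    int main(void)
    {
        print_binary(20);              /* prints 10100 */
        return 0;
    }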
Fractional numbers are expressed in terms of negative powers of 2. If A = 0.a_(−1)a_(−2)...a_(−n), then A = a_(−1)/2 + a_(−2)/4 + ... + a_(−n)/2^n.
If we multiply it by two, 2A = a_(−1) + a_(−2)/2 + ... + a_(−n)/2^(n−1). If 2A ≥ 1, we must have a_(−1) = 1; otherwise a_(−1) = 0. We can determine the other bits in a similar way by successive multiplications by two.
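The multiplication method can be sketched the same way (print_fraction is again a name of my choosing; note that many decimal fractions, 0.4512 included, never reach exactly zero and must be cut off at some precision):

    #include <stdio.h>

    /* Print up to maxbits binary digits of a fraction x in [0, 1)
       by repeated multiplication by 2. */
    void print_fraction(double x, int maxbits)
    {
        printf("0.");
        for (int i = 0; i < maxbits && x != 0.0; i++) {
            x *= 2.0;
            if (x >= 1.0) {            /* integer part is 1: emit 1 */
                putchar('1');
                x -= 1.0;
            } else {                   /* integer part is 0: emit 0 */
                putchar('0');
            }
        }
        putchar('\n');
    }

    int main(void)
    {
        print_fraction(0.4512, 16);    /* prints 0.0111001110000101... */
        return 0;
    }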


Representing numbers with IEEE754 with Round to Nearest Even

I'm currently learning about the IEEE754 standard and rounding, and I have the following exercise:
Add -325.875 to 0.546875 in IEEE754, but with 3 bits dedicated to the mantissa instead of 23.
I'm having a lot of trouble doing this, especially representing the intermediate values and the guard/round/sticky bits. Can someone give me a step-by-step solution to the problem?
My biggest problem is that obviously I can't represent 0.546875 as 0.100011 as that would have more precision than the system has. So how would that be represented?
Apologies if the wording is confusing.
Preliminaries
The preferred term for the fraction portion of a floating-point number is “significand,” not “mantissa.” “Mantissa” is an old word for the fraction portion of a logarithm. Mantissas are logarithmic; adding to the mantissa multiplies the number represented. Significands are linear; adding to the significand adds to the number represented (as scaled by the exponent).
When working with a significand, use its mathematical precision, not the number of bits in the storage format. The IEEE-754 binary32 format has 23 bits in its primary field for the encoding of a significand, but another bit is encoded via the exponent field. Mathematically, numbers in the binary32 format behave as if they have 24 bits in their significands.
So, the task is to work with numbers with four bits in their significands, not three.
Work
In binary, −325.875 is −101000101.111₂. In scientific notation, that is −1.01000101111₂ × 2^8. Rounding it to four bits in the significand gives −1.010₂ × 2^8.
In binary, 0.546875 is .100011₂. In scientific notation, that is 1.00011₂ × 2^−1. Rounding it to four bits in the significand gives 1.001₂ × 2^−1. Note that the first four bits are 1000, but they are immediately followed by 11, so we round up: 1.00011 is closer to 1.001 than it is to 1.000.
So, in a floating-point format with four-bit significands, we want to add −1.010₂ × 2^8 and 1.001₂ × 2^−1. If we adjust the latter number to have the same exponent as the former, we have −1.010₂ × 2^8 and 0.000000001001₂ × 2^8. To add those, we note the signs are different, so we want to subtract the magnitudes. It may help to line up the digits as we were taught in elementary school:
  1.010000000000
− 0.000000001001
————————————————
  1.001111110111
Thus, the mathematical result would be −1.001111110111₂ × 2^8. However, we need to round the significand to four bits. The first four bits are 1001, but they are followed by 11, so we round up, producing 1010. So the final result is −1.010₂ × 2^8.
−1.010₂ × 2^8 is −1.25 × 2^8 = −320.
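As a cross-check, here is a small C sketch (my own construction, not part of the original answer) that rounds a value to four significant binary digits with round-to-nearest, ties-to-even, and reproduces the result above; it rounds the exact double sum rather than modelling the guard/round/sticky bits explicitly:

    #include <stdio.h>
    #include <math.h>

    /* Round x to p significant binary digits, round-to-nearest,
       ties-to-even (nearbyint uses the default rounding mode). */
    double round_sig(double x, int p)
    {
        if (x == 0.0)
            return 0.0;
        int e = (int)floor(log2(fabs(x)));   /* 1 <= |x| / 2^e < 2 */
        double scale = ldexp(1.0, p - 1 - e);
        return nearbyint(x * scale) / scale;
    }

    int main(void)
    {
        double a = round_sig(-325.875, 4);   /* -320.0  = -1.010_2 x 2^8  */
        double b = round_sig(0.546875, 4);   /*  0.5625 =  1.001_2 x 2^-1 */
        printf("%g %g %g\n", a, b, round_sig(a + b, 4)); /* -320 0.5625 -320 */
        return 0;
    }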

Negative fixed point number representation

I am writing a generic routine for converting fixed-point numbers between decimal and binary representations.
For positive numbers the processing is simple, but when it comes to negative ones I have found divergent sources. Some say there is a single bit used to hold the sign, while others say the whole number should be represented as a pseudo-integer using 2's complement, even if it is negative.
Can anyone tell me which source is correct, or is there a standard representation for signed fixed-point numbers?
Additionally, if the 2's complement representation is correct, how do we represent negative numbers with a zero integer part, for example −0.125?
Fixed-point numbers are just binary values where the place values have been changed. Assigning place values to the bits is an arbitrary human activity, and we can do it in any way that makes sense. Normally we talk about binary integers so it is convenient to assign the place value 2^0 = 1 to the LSB, 2^1=2 to the bit to the left of the LSB, and so on. For an N bit integer the place value of the MSB becomes 2^(N-1). If we want a two's-complement representation, we change the place value of the MSB to -2^(N-1) and all of the other bit place values are unchanged.
For fixed-point values, if we want F bits to represent the fractional part of the number, then the place value of the LSB becomes 2^(0−F), and the place value of the MSB becomes 2^(N−1−F) for unsigned numbers and −2^(N−1−F) for signed numbers.
So, how would we represent −0.125 in a two's-complement fixed-point value? That is equal to 0.875 − 1, so we can use a representation where the place value of the MSB is −1 and the values of all of the other bits add up to 0.875. If you choose a 4-bit fixed-point number with 3 fraction bits, you would say that 1111 binary equals −0.125 decimal. Adding up the place values of the bits, we have (−1) + 0.5 + 0.25 + 0.125 = −0.125. My personal preference is to write the binary number as 1.111 to note which bits are fraction and which are integer.
The reason we use this approach is that the normal integer arithmetic operators still work.
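Here is a minimal C sketch of this 4-bit, 3-fraction-bit format (the names encode/decode and the round-half-away-from-zero choice in encode are mine, for illustration):

    #include <stdio.h>

    #define WIDTH     4   /* total bits */
    #define FRAC_BITS 3   /* fraction bits */

    /* Real value -> WIDTH-bit two's-complement pattern (here 0..15). */
    unsigned encode(double x)
    {
        int n = (int)(x * (1 << FRAC_BITS) + (x < 0 ? -0.5 : 0.5));
        return (unsigned)n & ((1u << WIDTH) - 1);  /* wrap into WIDTH bits */
    }

    /* WIDTH-bit pattern -> real value; the MSB carries the weight
       -2^(WIDTH-1-FRAC_BITS), all other bits contribute positive values. */
    double decode(unsigned bits)
    {
        int n = (bits & (1u << (WIDTH - 1))) ? (int)bits - (1 << WIDTH)
                                             : (int)bits;
        return n / (double)(1 << FRAC_BITS);
    }

    int main(void)
    {
        unsigned p = encode(-0.125);
        printf("%u%u%u%u -> %g\n", (p >> 3) & 1, (p >> 2) & 1,
               (p >> 1) & 1, p & 1, decode(p));  /* prints 1111 -> -0.125 */
        return 0;
    }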
It's easiest to think of fixed-point numbers as scaled integers — rather than shifted integers. For a given fixed-point type, there is a fixed scale which is a power of two (or ten). To convert from the real value to the integer representation, multiply by that scale. To convert back again, simply divide. Then the issue of how negative values are represented becomes a detail of the integer type with which you are representing your number.
Can anyone tell me which source is correct...
Both are problematic.
Your first source is incorrect: the given example is not the same as 2's complement numbers.
In two’s complement, the MSB's (most significant bit's) weight is negated but the other bits still contribute positive values. Thus a two’s complement number with all bits set to 1 does not produce the minimum value.
Your second source could be a little misleading where it says...
shifting the bit pattern of a number to the right by 1 bit always divide the number by 2.
This statement glosses over what happens when the LSB (least significant bit) is 1 and gets shifted out, and the resulting rounding. Right-shifting commonly rounds towards negative infinity, while division rounds towards zero (truncation). Both produce the same behavior for positive numbers: 3/2 == 1 and 3>>1 == 1. For negative numbers, they are contrary: -3/2 == -1 but -3>>1 == -2.
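A short C demonstration of the difference (keep in mind that right-shifting a negative value is implementation-defined in C; the arithmetic shift assumed here is the common behavior):

    #include <stdio.h>

    int main(void)
    {
        printf("%d %d\n",  3 / 2,  3 >> 1);  /* prints  1  1 */
        printf("%d %d\n", -3 / 2, -3 >> 1);  /* prints -1 -2: division
                                                truncates toward zero, an
                                                arithmetic shift floors */
        return 0;
    }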
...is there a standard representation for signed fixed-point numbers?
I don't think so. There are language-specific standards, e.g. ISO/IEC TR 18037 (draft). But the convention of scaling integers to approximate real numbers of predetermined range and resolution is well established. How the underlying integers are represented is another matter.
Additionally, if the 2's complement representation is correct, how do we represent negative numbers with a zero integer part, for example −0.125?
That depends on the format of your integer and your choice of radix point. Assuming a 16-bit two's-complement number representing binary fixed-point values with 15 fraction bits, the scaling factor is 2^15, which is 32768. Multiply the value to store it as an integer: −0.125 × 32768 = −4096, and divide to retrieve it: −4096 / 32768 = −0.125.
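In C this might look like the following sketch (the Q15-style scaling, with all 15 available bits used for the fraction, is one choice among many):

    #include <stdio.h>
    #include <stdint.h>

    #define SCALE 32768.0   /* 2^15: 16-bit value, 15 fraction bits */

    int main(void)
    {
        int16_t stored = (int16_t)(-0.125 * SCALE);  /* -> -4096 */
        double  value  = stored / SCALE;             /* -> -0.125 */
        printf("%d -> %g\n", stored, value);  /* prints -4096 -> -0.125 */
        return 0;
    }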

Binary numbers addition

I have just started doing some binary number exercises to prepare for a class that I will start next month, and I got the hang of all the conversions from decimal to binary and vice versa. But now, with the two letters 'a' and 'b' in this exercise, I am not sure how I can apply that knowledge to add the bits:
Given two binary numbers a = (a7a6 ... a0) and b = (b7b6 ... b0), there is a calculator that can add 4-bit binary numbers. How many bits will be used to represent the result of a 4-bit addition? Why?
We would like to use our calculator to calculate a + b. For this we can put as many as eight bits (4 bits of the first and 4 bits of the second number) of our choice into the calculator and continue to use the result bit by bit.
How many additions does our calculator have to carry out for the addition of a and b at most? How many bits long is the result at most?
How many additions does the calculator have to perform at least for the result to be correct for all possible inputs a and b?
The number of bits needed to represent a 4-bit binary addition is 5. This is because there could be a carry-over bit that pushes the result to 5 bits.
For example, 1111 + 0011 = 10010.
This can be done the same way as adding decimal numbers. From right to left just add the numbers of the same significance. If the two bits are 1+1, the result is 10 so that place becomes a zero and the 1 carries over to the next pair of bits, just like decimal addition.
With regard to the min/max number of steps, this seems more like an algorithm-specific question. Look up some different binary addition algorithms, such as ripple-carry, and it should give you a better idea of what is meant by the question.
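To illustrate how the 4-bit calculator can be chained, here is a C sketch (add4 is a hypothetical stand-in for the exercise's calculator, assumed here to also accept a carry-in bit): two additions suffice for two 8-bit operands when the carry out of the low nibble is fed into the high nibble, and the final result can be up to 9 bits long.

    #include <stdio.h>

    /* The 4-bit "calculator": adds two 4-bit values plus a carry-in,
       returning a 5-bit result (4 sum bits and a carry-out). */
    unsigned add4(unsigned a, unsigned b, unsigned cin)
    {
        return (a & 0xF) + (b & 0xF) + (cin & 1);  /* at most 11111_2 = 31 */
    }

    int main(void)
    {
        unsigned a = 0xB7, b = 0x5C;                  /* two 8-bit operands */
        unsigned lo  = add4(a, b, 0);                 /* low nibbles */
        unsigned hi  = add4(a >> 4, b >> 4, lo >> 4); /* high nibbles + carry */
        unsigned sum = (hi << 4) | (lo & 0xF);        /* up to 9 bits */
        printf("0x%X + 0x%X = 0x%X\n", a, b, sum);    /* 0xB7 + 0x5C = 0x113 */
        return 0;
    }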

How to convert mantissa from decimal to binary efficiently

I am aware of the textbook method where we multiply the mantissa by 2, take its integer part as the next bit, multiply the fractional part by 2, and repeat until we get zero or reach our desired precision.
Is there a more efficient algorithm for converting a mantissa from base 10 to base 2 than the one mentioned above?
The algorithm you mention runs in O(n) time, where n is the number of bits desired. We cannot do better than this: any algorithm must somehow produce all n desired bits of the output, so it must take time at least proportional to n, or else the output could not possibly contain all the desired information.

Given the following, how do I find the number of normalized floating-point numbers and why?

I'm trying to understand how floating-point arithmetic plays a role in computer science when using the binary system. I came across an excerpt from What Every Computer Scientist Should Know About Floating-Point Arithmetic, which defines normalized numbers as floating-point numbers whose representation is unique because the leading digit of the significand is non-zero. It goes on to say...
When β = 2, p = 3, e_min = −1, and e_max = 2, there are 16 normalized floating-point numbers, as shown in Figure D-1.
Here β is the base, p is the precision, e_min is the minimum exponent, and e_max is the maximum exponent.
My attempt at understanding how he came to the conclusion of there being 16 normalized floating-point numbers was to multiply the number of possible significands, β^p, by the number of possible exponents, e_max − e_min + 1. My result was 32 possible normalized floating-point values. I am unsure how to get the correct result of 16 normalized floating-point values declared in the paper above. I assumed negative floating-point values were excluded, so I did not include them in my calculations.
This question is more geared toward mathematical formulae. But it will help me to better understand how floating-point arithmetic works in computer science.
I would like to know how to get the correct result of 16 normalized floating-point numbers and why.
Since the first bit is always 1, with 3 bits for the mantissa you have only two bits to vary, yielding 4 different mantissa values. Combined with 4 different exponent values that's 16. I haven't looked at the paper though.
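A quick C enumeration (my own sketch) confirms the count of 16 normalized values:

    #include <stdio.h>
    #include <math.h>

    /* Enumerate the normalized values for beta = 2, p = 3,
       e_min = -1, e_max = 2: a 1.b1b2 significand times 2^e. */
    int main(void)
    {
        int count = 0;
        for (int e = -1; e <= 2; e++)
            for (int frac = 0; frac < 4; frac++) {    /* the two free bits */
                double val = ldexp(1.0 + frac / 4.0, e);  /* 1.b1b2 x 2^e */
                printf("1.%d%d x 2^%d = %g\n", frac >> 1, frac & 1, e, val);
                count++;
            }
        printf("count = %d\n", count);                /* prints count = 16 */
        return 0;
    }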
My attempt at understanding how he came to the conclusion of there being 16 normalized floating-point numbers was to multiply the number of possible significands, β^p, by the number of possible exponents, e_max − e_min + 1
This is correct, except that the number of possible significands is not β^p: in binary with an implicit leading 1, the number of possible significands is β^(p−1), encoded over p−1 bits.
In other words, the missing values for the possible significands have already been taken advantage of when the encoding reserved, say, 52 bits to encode a precision of 53 binary digits.
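To see the implicit bit concretely, here is a C sketch that pulls apart the bit pattern of a binary64 (double) value; the field positions follow IEEE-754 binary64:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* binary64 stores 52 fraction bits; for normal numbers a leading 1
       is implicit, giving 53 binary digits of precision. */
    int main(void)
    {
        double d = 1.5;                      /* 1.1_2 x 2^0 */
        uint64_t u;
        memcpy(&u, &d, sizeof u);            /* reinterpret the bit pattern */
        uint64_t frac = u & ((1ULL << 52) - 1);    /* 52-bit fraction field */
        int exp = (int)((u >> 52) & 0x7FF) - 1023; /* unbiased exponent */
        printf("fraction field = %013llx, exponent = %d\n",
               (unsigned long long)frac, exp);
        /* prints: fraction field = 8000000000000, exponent = 0
           i.e. the stored bits are .1000...; the leading 1. is implicit */
        return 0;
    }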