Horner's Scheme Binary to Decimal Conversion - binary

Using Horner's scheme, represent (evaluate) the binary unsigned whole number 11001101 in decimal. I got 410; I just want to make sure it's right.

11001101
Horner's Scheme:
((((((1*2 + 1)*2 + 0)*2 + 0)*2 + 1)*2 + 1)*2 + 0)*2 + 1
1*2 = 2, + 1 = 3
3*2 = 6, + 0 = 6
6*2 = 12, + 0 = 12
12*2 = 24, + 1 = 25
25*2 = 50, + 1 = 51
51*2 = 102, + 0 = 102
102*2 = 204, + 1 = 205
11001101 (binary) = 205 (decimal)
NOTE: You do not multiply the last 1 because the 2 exponent is actually 0 there, and 2^0=1.
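The running computation above is just a left-to-right fold: multiply the total by 2, then add the next bit. A minimal Python sketch (the function name is my own):

```python
# Horner's scheme: process the bits left to right, multiplying the
# running total by 2 before adding each new bit.
def horner_bin_to_dec(bits: str) -> int:
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)
    return value

print(horner_bin_to_dec("11001101"))  # 205
```

Note that the last bit is only added, never multiplied, which matches the NOTE above about the final 2^0 place.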

Related

2's Complement - Another Interpretation. Issues?

I've been browsing around the internet looking to validate my interpretation of 2's Complement.
Everywhere I look (including my educational class) I see the following:
Invert all bits in the # (1->0 and simultaneously 0->1, i.e. swap all 1s and 0s for all bits)
Add 1 to the now-inverted #.
However, my interpretation is to do none of that, but rather to treat the MSB (Most Significant Bit, the bit with the highest place value) as having its usual place value multiplied by -1, and then add the rest of the bits normally (typical place-value addition).
The maximum # and minimum # achieved by my interpretation are exactly the same as with 2's complement (as shown by the first 3 examples).
E.g.
0000 = 0·(-2^3) + 0·2^2 + 0·2^1 + 0·2^0 = 0
1000 = 1·(-2^3) + 0·2^2 + 0·2^1 + 0·2^0 = -8
0111 = 0·(-2^3) + 1·2^2 + 1·2^1 + 1·2^0 = +7
1111 = 1·(-2^3) + 1·2^2 + 1·2^1 + 1·2^0 = -8 + 7 = -1 (example for signed 4-bit #s)

1111 1111 = -2^7 + 2^6+2^5+2^4 + 2^3+2^2+2^1+2^0 = -128 + 64+32+16+8+4+2+1
= -128 + (2^7 - 1) = -128 + 127 = -1 (example for signed 8-bit #s)

1111 1111 1111 1111 = -2^15 + 2^14+2^13+2^12+2^11+2^10+2^9+2^8+2^7+2^6+2^5+2^4+2^3+2^2+2^1+2^0
= -32768 + 16384+8192+4096 + 2048+1024+512+256 + 128+64+32+16 + 8+4+2+1
= -32768 + (2^15 - 1) = -32768 + 32767 = -1 (example for signed 16-bit #s)

1111 1111 1111 1111 1111 1111 1111 1111
= -2^31 + 2^30+2^29+2^28+2^27+2^26+2^25+2^24+2^23+2^22+2^21+2^20+2^19+2^18+2^17+2^16+2^15+2^14+2^13+2^12+2^11+2^10+2^9+2^8+2^7+2^6+2^5+2^4+2^3+2^2+2^1+2^0
= -2,147,483,648 + ... + 32768+16384+8192+4096 + 2048+1024+512+256 + 128+64+32+16 + 8+4+2+1
= -2,147,483,648 + (2^31 - 1) = -2,147,483,648 + 2,147,483,647 = -1 (example for signed 32-bit #s)
My interpretation appears to work perfectly.
At least to me, this interpretation is much easier to understand, but why haven't I ever seen anybody else use it? Is the interpretation flawed in some non-obvious way?
Finally, is this the way 2's complement was invented with another method being taught instead?
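For what it's worth, the agreement the examples above demonstrate can be checked exhaustively for all 8-bit patterns; a minimal sketch (both helper names are my own):

```python
# Compare the two readings of an n-bit two's-complement pattern:
# (a) treat the MSB as having weight -2^(n-1), add the rest normally;
# (b) for a pattern with MSB set, invert all bits, add 1, and negate.
def msb_negative(bits: str) -> int:
    n = len(bits)
    value = -int(bits[0]) * 2 ** (n - 1)       # MSB weighted negatively
    for i, bit in enumerate(bits[1:], start=1):
        value += int(bit) * 2 ** (n - 1 - i)   # remaining bits as usual
    return value

def invert_add_one(bits: str) -> int:
    if bits[0] == "0":
        return int(bits, 2)                    # non-negative: read directly
    inverted = "".join("1" if b == "0" else "0" for b in bits)
    return -(int(inverted, 2) + 1)             # negative: magnitude via ~x + 1

# Both readings agree on every 8-bit pattern.
assert all(
    msb_negative(f"{x:08b}") == invert_add_one(f"{x:08b}")
    for x in range(256)
)
print(msb_negative("1111"), msb_negative("10010100"))  # -1 -108
```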

A negative floating number to binary

So the exercise says: "Consider the binary encoding of real numbers on 16 bits. Fill in the empty positions of the binary encoding of the number -0.625, knowing that "1110" stands for the exponent and means minus one ("-1"):
_ 1110_ _ _ _ _ _ _ _ _ _ _ "
I can't find the answer and I know this is not a hard exercise (at least it doesn't look like a hard one).
Let's ignore the sign for now, and decompose the value 0.625 into (negative) powers of 2:
0.625(dec) = 5 * 0.125 = 5 * 1/8 = 0.101(bin) * 2^0
This should be normalized (value shifted left until there is a one before the decimal point, and exponent adjusted accordingly), so it becomes
0.625(dec) = 1.01(bin) * 2^-1 (or 1.25 * 0.5)
With hidden bit
Assuming you have a hidden bit scenario (meaning that, for normalized values, the top bit is always 1, so it is not stored), this becomes .01 filled up on the right with zero bits, so you get
sign = 1 -- 1 bit
exponent = 1110 -- 4 bits
significand = 0100 0000 000 -- 11 bits
So the bits are:
1 1110 01000000000
Grouped differently:
1111 0010 0000 0000(bin) or F200(hex)
Without hidden bit (i.e. top bit stored)
If there is no hidden bit scenario, it becomes
1 1110 10100000000
or
1111 0101 0000 0000(bin) = F500(hex)
First of all you need to understand that each number "z" can be represented by
z = m * b^e
m = mantissa, b = base, e = exponent
So -0.625 could be represented as:
-0.625 * 10^0
-6.25 * 10^-1
-62.5 * 10^-2
-0.0625 * 10^1
With the IEEE conversion we aim for the normalized floating point number, which means there is only one digit in front of the decimal point (-6.25 * 10^-1).
In binary the single digit in front of the point will always be a 1, so this digit is not stored.
You're converting into a 16 bit float so you have:
1 bit sign + 5 bits exponent + 10 bits mantissa == 16 bits
Since the exponent can be negative or positive (as you've seen above, this depends only on the shifting of the point), the so-called bias was introduced. For 5 bits the bias value is 01111 == 15(dec), with a stored value of 14 meaning an exponent of -1, 16 meaning +1, and so on.
Ok, enough small talk; let's convert your number as an example to show the process of conversion:
1. Convert the integer part to binary as usual.
2. Multiply the fractional part by 2; if the result is greater than or equal to 1, subtract 1 and note a 1, otherwise note a 0.
3. Repeat step 2 until the result is 0 or you have noted as many digits as your mantissa holds.
4. Shift the point until only one digit remains in front of it and count the shifts. If you shifted to the left, add the count to the bias; if you had to shift to the right, subtract it from the bias. This is your exponent.
5. Determine your sign and put all the parts together.
-0.625
1. 0 to binary == 0
2. 0.625 * 2 = 1.25 ==> 1
0.25 * 2 = 0.5 ==> 0
0.5 * 2 = 1 ==> 1
stop (the fractional part is now 0)
3. The intermediate result therefore is 0.101, i.e. -0.101 with the sign.
Shift the point 1 place to the right for a normalized floating point number:
-1.01
exponent = bias + (-1) == 15 - 1 == 14(dec) == 01110(bin)
4. Put the parts together, sign = 1 (negative), and remember we do not store the leading 1 of the number:
1 01110 01
Since we stopped early in the mantissa calculation, fill the rest of the bits with 0:
1 01110 0100000000
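The repeated multiply-by-2 from steps 2-3 can be sketched as a small Python helper (the function name is my own):

```python
# Peel off mantissa bits of a fractional part by repeated doubling:
# each step, the integer part of frac*2 is the next bit.
def fraction_bits(frac: float, max_bits: int = 10) -> str:
    bits = []
    while frac != 0 and len(bits) < max_bits:
        frac *= 2
        if frac >= 1:
            bits.append("1")
            frac -= 1
        else:
            bits.append("0")
    return "".join(bits)

print(fraction_bits(0.625))  # '101'
```

The loop terminates exactly (no rounding) here because 0.625 is a sum of negative powers of two; for values like 0.1 it would run until max_bits is reached.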
The IEEE 754 standard specifies a binary16 as having the following format:
Sign bit: 1 bit
Exponent width: 5 bits
Significand precision: 11 bits (10 explicitly stored)
Equation: value = (-1)^signbit * 2^(exponent - 15) * 1.significandbits
Solution is as follows,
-0.625 = -1 x 0.5 x 1.25
significand: 1.25 = 1.01(bin), so the stored bits are 0100000000
exponent = 14 = 01110
signbit = 1
ans = (1)(01110)(0100000000)
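The binary16 encoding of -0.625 can be verified directly in Python, since CPython's struct module supports IEEE 754 half precision via the 'e' format character (Python 3.6+):

```python
import struct

# Pack -0.625 as big-endian binary16, then reread the raw 16 bits.
raw, = struct.unpack(">H", struct.pack(">e", -0.625))
print(f"{raw:016b}")  # 1011100100000000  -> sign 1, exp 01110, frac 0100000000
print(hex(raw))       # 0xb900
```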

Trouble understanding an exercise given the two's complement in hex format to convert into decimal format

I am trying to convert the two's complement of the following hex values to their decimal values:
23, 57, 94 and 87.
a) 23
Procedure: (3 x 16^0) + (2 x 16^1) -> (3) + (32) = 35 (Correct)
b) 57
Procedure: (7 x 16^0) + (5 x 16^1) -> (7) + (80) = 87 (Correct)
For 94 and 87, the correct values are -108 & -121 respectively.
If I follow the procedure I used for numbers a) and b) I get 148 & 128 for 94 & 87.
Can someone explain how I get to the correct results, since mine are wrong? Do I need to convert the byte to binary first and then proceed from there?
Thanks a lot in advance!
0x94 = 0b10010100
Now you can convert it to a decimal number as if it were a normal binary number, except that the MSB counts as negative:
1 * -2^7 + 0 * 2^6 + 0 * 2^5 + 1 * 2^4 + 0 * 2^3 + 1 * 2^2 + 0 * 2^1 + 0 * 2^0 =
-2^7 + 2^4 + 2^2 =
-128 + 16 + 4 =
-108
The other number works similarly.
First write down the binary representation of the hex value:
94h = 10010100b
To take the two's complement, you flip all bits and add 00000001b, so the two's complement of this binary string is
01101011b + 00000001b = 01101100b
Since the MSB of the original value is 1, the number is negative; the two's complement you just computed is its magnitude:
01101100b = 108d, so the value is -108d.
The other works similarly.
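Both procedures above boil down to the same arithmetic: read the byte as unsigned, then subtract 256 when the sign bit is set. A minimal sketch (the helper name is my own):

```python
# Interpret a two-digit hex string as an 8-bit two's-complement value:
# parse it as unsigned, then subtract 256 if the sign bit (0x80) is set.
def twos_complement_byte(hex_str: str) -> int:
    u = int(hex_str, 16)
    return u - 256 if u & 0x80 else u

print(twos_complement_byte("23"))  # 35
print(twos_complement_byte("57"))  # 87
print(twos_complement_byte("94"))  # -108
print(twos_complement_byte("87"))  # -121
```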

How are Hex and Binary parsed?

HEX Article
By this I mean,
In a program if I write this:
1111
I mean 15. Likewise, if I write this:
0xF
I also mean 15. I am not entirely sure how the process is carried out through the chip, but I vaguely recall something about flagging, but in essence the compiler will calculate
2^3 + 2^2 + 2^1 + 2^0 = 15
Will 0xF be converted to "1111" before this calculation or is it written that somehow HEX F represents 15?
Or simply, 16^0 ? -- which obviously is not 15.
I cannot find any helpful article that states a conversion from HEX to decimal rather than first converting to binary.
Binary(base2) is converted how I did above (2^n .. etc). How is HEX(base16) converted? Is there an associated system, like base 2, for base 16?
I found in an article:
0x1234 = 1 * 16^3 + 2 * 16^2 + 3 * 16^1 + 4 * 16^0 = 4660
But, what if the number was 0xF, does this represent F in the 0th bit? I cannot find a straightforward example.
There are sixteen hexadecimal digits, 0 through 9 and A through F.
Digits 0 through 9 are the same in hex and in decimal.
0xA is 10 in decimal.
0xB is 11 in decimal.
0xC is 12 in decimal.
0xD is 13 in decimal.
0xE is 14 in decimal.
0xF is 15 in decimal.
For example:
0xA34F = 10 * 16^3 + 3 * 16^2 + 4 * 16^1 + 15 * 16^0 = 41807
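The positional rule works directly in base 16, with no intermediate binary step; a single hex digit like 0xF is simply 15 * 16^0 = 15. A minimal sketch (HEX_DIGITS and parse_hex are illustrative names):

```python
# Parse a hex string positionally: multiply the running total by 16
# for each digit, exactly as Horner's scheme does for binary with 2.
HEX_DIGITS = "0123456789ABCDEF"

def parse_hex(s: str) -> int:
    value = 0
    for ch in s.upper():
        value = value * 16 + HEX_DIGITS.index(ch)
    return value

print(parse_hex("F"))     # 15
print(parse_hex("1234"))  # 4660
print(parse_hex("A34F"))  # 41807
```

In practice Python's built-in int("A34F", 16) does the same thing.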

division by 2 in Binary Signed digit (Redundant Binary representation)

How can I do division by 2 in Binary Signed Digit (redundant binary representation)? Shifting won't work, right?
A redundant binary representation is just an expression of the form:
\sum_{i=0}^n d_i 2^i
where the d_i's are drawn from a larger set than just {0,1}.
Dividing by two, or shifting right, takes that to:
\sum_{i=0}^{n-1} d_{i+1} 2^i + f(d_0)
The trick comes in how to deal with adjusting for the redundant representation for d_0.
If your RBR has digits drawn from {0,1,2} and the least significant digit is a 2, you will have to add 1 to the result to compensate, so f(0) = 0, f(1) = 0, f(2) = 1 should work.
4 = 12_base2, so 12_base2 >> 1 = 1 + f(2) = 1 + 1 = 2_base2 = 2 as expected.
6 = 102_base2, so 102_base2 >> 1 = 10_base2 + f(2) = 11_base2 = 3
You can get something similar for signed redundant binary representations (i.e. with d_i in {-1,0,1}) by setting f(-1) = -1.
1 = 1(-1)_base2, so 1(-1)_base2 >> 1 = 1 + f(-1) = 1 - 1 = 0
So ultimately the naive approach of just shifting does work; you just need a fudge factor to account for the redundant encoding of the digit that is shifted out.
If your chosen RBR includes more options, you'll need to adjust the fudge factor accordingly.
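The shift-plus-fudge-factor idea can be sketched in Python, with digits stored least-significant first (the function names are my own, and the result is returned as a plain integer rather than re-encoded in RBR):

```python
# Digits are stored least-significant first, drawn from {-1, 0, 1, 2}.
def rbr_value(digits):
    return sum(d * 2 ** i for i, d in enumerate(digits))

# Fudge factor f for the digit shifted out (d_0), as described above.
FUDGE = {0: 0, 1: 0, 2: 1, -1: -1}

def rbr_shift_right(digits):
    return rbr_value(digits[1:]) + FUDGE[digits[0]]

# 6 = "102" in the {0,1,2} RBR (LSD-first: [2, 0, 1]); 6 >> 1 = 3.
print(rbr_shift_right([2, 0, 1]))  # 3
# 1 = "1(-1)" in the signed RBR (LSD-first: [-1, 1]); 1 >> 1 = 0.
print(rbr_shift_right([-1, 1]))    # 0
```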