How are Hex and Binary parsed?

By this I mean,
In a program if I write this:
1111
I mean 15. Likewise, if I write this:
0xF
I also mean 15. I am not entirely sure how the process is carried out in the chip (I vaguely recall something about flagging), but in essence the compiler will calculate
2^3 + 2^2 + 2^1 + 2^0 = 15
Will 0xF be converted to "1111" before this calculation or is it written that somehow HEX F represents 15?
Or simply, 16^0 ? -- which obviously is not 15.
I cannot find any helpful article that explains converting from hex directly to decimal rather than first converting to binary.
Binary (base 2) is converted as I did above (2^n, etc.). How is hex (base 16) converted? Is there an associated system, like base 2, for base 16?
I found in an article:
0x1234 = 1 * 16^3 + 2 * 16^2 + 3 * 16^1 + 4 * 16^0 = 4660
But, what if the number was 0xF, does this represent F in the 0th bit? I cannot find a straightforward example.

There are sixteen hexadecimal digits, 0 through 9 and A through F.
Digits 0 through 9 are the same in hex and in decimal.
0xA is 10 in decimal.
0xB is 11.
0xC is 12.
0xD is 13.
0xE is 14.
0xF is 15.
For example:
0xA34F = 10 * 16^3 + 3 * 16^2 + 4 * 16^1 + 15 * 16^0
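The positional rule above can be sketched in a few lines of Python (the function name is mine, purely for illustration):

```python
def hex_to_decimal(s):
    """Convert a hex string to decimal using the positional rule d_i * 16^i."""
    digits = "0123456789ABCDEF"
    total = 0
    for ch in s.upper():
        # Multiplying the running total by 16 is equivalent to summing d_i * 16^i.
        total = total * 16 + digits.index(ch)
    return total

print(hex_to_decimal("F"))     # 15: a single digit is just d * 16^0
print(hex_to_decimal("1234"))  # 4660
```

So 0xF needs no detour through binary: a one-digit hex number is simply its digit value times 16^0.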

Related

A negative floating number to binary

So the exercise says: "Consider binary encoding of real numbers on 16 bits. Fill in the empty points of the binary encoding of the number -0.625, knowing that "1110" stands for the exponent and means minus one "-1"
_ 1110_ _ _ _ _ _ _ _ _ _ _ "
I can't find the answer and I know this is not a hard exercise (at least it doesn't look like a hard one).
Let's ignore the sign for now, and decompose the value 0.625 into (negative) powers of 2:
0.625(dec) = 5 * 0.125 = 5 * 1/8 = 0.101(bin) * 2^0
This should be normalized (value shifted left until there is a one before the decimal point, and exponent adjusted accordingly), so it becomes
0.625(dec) = 1.01(bin) * 2^-1 (or 1.25 * 0.5)
With hidden bit
Assuming you have a hidden bit scenario (meaning that, for normalized values, the top bit is always 1, so it is not stored), this becomes .01 filled up on the right with zero bits, so you get
sign = 1 -- 1 bit
exponent = 1110 -- 4 bits
significand = 0100 0000 000 -- 11 bits
So the bits are:
1 1110 01000000000
Grouped differently:
1111 0010 0000 0000(bin) or F200(hex)
Without hidden bit (i.e. top bit stored)
If there is no hidden bit scenario, it becomes
1 1110 10100000000
or
1111 0101 0000 0000(bin) = F500(hex)
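Both groupings are easy to double-check by evaluating the bit strings directly (this assumes the 1/4/11 sign/exponent/significand layout from the exercise):

```python
# With hidden bit: sign = 1, exponent = 1110, significand = 01000000000
with_hidden = int("1" + "1110" + "01000000000", 2)
print(hex(with_hidden))  # 0xf200

# Without hidden bit: the leading 1 of 1.01 is stored explicitly
without_hidden = int("1" + "1110" + "10100000000", 2)
print(hex(without_hidden))  # 0xf500
```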
First of all you need to understand that each number "z" can be represented by
z = m * b^e
m = mantissa, b = base, e = exponent
So -0.625 could be represented as:
-0.625 * 10^ 0
-6.25 * 10^-1
-62.5 * 10^-2
-0.0625 * 10^1
With the IEEE conversion we aim for the normalized floating point number, which means there is exactly one digit before the decimal point (-6.25 * 10^-1).
In binary the single digit before the point will always be a 1, so this digit is not stored.
You're converting into a 16 bit float so you have:
1 bit sign + 5 bits exponent + 10 bits mantissa == 16 bits
Since the exponent can be negative or positive (as you've seen above, this depends only on the shifting of the point), they came up with the so-called bias. For 5 bits the bias value is 01111 == 15 (dec), with 14 meaning 2^-1 and 16 meaning 2^1, and so on.
OK, enough small talk, let's convert your number as an example to show the process of conversion:
Convert the integer part to binary as usual.
Multiply the fractional part by 2; if the result is greater than or equal to 1, subtract 1 and note a 1; if it is smaller than 1, note a 0.
Repeat this step until the result is 0 or you have noted as many bits as your mantissa holds.
Shift the point until only one digit remains before it and count the shifts. If you shifted to the left, add the count to the bias; if you shifted to the right, subtract the count from the bias. This is your exponent.
Determine your sign and put all the parts together.
-0.625
1. 0 to binary == 0
2. 0.625 * 2 = 1.25 ==> 1 (subtract the 1)
0.25 * 2 = 0.5 ==> 0
0.5 * 2 = 1 ==> 1
Abort (the result is 0)
3. The intermediate result therefore is -0.101
shift the comma 1 times to the right for a normalized floating point number:
-1.01
exponent = bias + (-1) == 15 - 1 == 14 (dec) == 01110 (bin)
4. Put the parts together: sign = 1 (negative), and remember we do not store the leading 1 of the number:
1 01110 01
Since we aborted during our mantissa calculation, fill the rest of the bits with 0:
1 01110 01 000 000 00
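The multiply-by-2 loop from step 2 can be sketched like this (a minimal illustration of the procedure, not production float-packing code; the function name is mine):

```python
def fraction_to_bits(frac, max_bits):
    """Repeatedly double the fraction; each integer carry is the next bit."""
    bits = ""
    while frac != 0 and len(bits) < max_bits:
        frac *= 2
        if frac >= 1:
            bits += "1"   # carry out: note a 1 and subtract it
            frac -= 1
        else:
            bits += "0"   # no carry: note a 0
    return bits

print(fraction_to_bits(0.625, 10))  # '101', i.e. 0.101 in binary
```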
The IEEE 754 standard specifies a binary16 as having the following format:
Sign bit: 1 bit
Exponent width: 5 bits
Significand precision: 11 bits (10 explicitly stored)
value = (-1)^signbit x 2^(exponent - 15) x 1.significandbits
The solution is as follows:
-0.625 = -1 x 0.5 x 1.25
1.25 = 1.01 (bin), so the stored significand bits = 0100000000
exponent = 14 = 01110
signbit = 1
ans = (1)(01110)(0100000000)
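Python's struct module understands IEEE 754 binary16 (format code 'e'), so the actual bit pattern of -0.625 can be inspected directly:

```python
import struct

raw = struct.pack(">e", -0.625)  # big-endian IEEE 754 binary16
bits = format(int.from_bytes(raw, "big"), "016b")
# Split into sign (1 bit), exponent (5 bits), significand (10 bits)
print(bits[0], bits[1:6], bits[6:])  # 1 01110 0100000000
```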

How to add hex numbers using two's complement?

I'm taking a beginner Computer Science course at my local college, and one part of this assignment asks me to add two hex numbers. We use an online basic computer to do this that takes specific inputs.
So according to my Appendix, when I type in a certain code it is supposed to "add the bit patterns [ED] and [09] as though they were two's complement representations." When I type the code into the system, it gives an output of F6... but I have no idea how it got there.
I understand how adding in two's complement works and I understand how to add two normal hex numbers, but when I add 09 (which is supposed to be the hex version of two's complement 9) and ED (which is supposed to be the hex version of two's complement -19), I get 10 if adding in two's complement or 162 if adding in hex.
Okay, you're just confusing yourself. Stop converting. This is all in hexadecimal:
ED
+ 09
----
D + 9 = 16 // keep the 6 and carry the 1
1
ED
+ 09
----
6
1 + E = F
ED
+ 09
----
F6
Regarding the first step, using 0x to denote hex numbers so we don't get lost:
0xD = 13,
0x9 = 9,
13 + 9 = 22,
22 = 0x16
therefore
0xD + 0x9 = 0x16
Gotta run, but just one more quick edit before I go.
D + 1 = E
D + 2 = F
D + 3 = 10 (remember, this is hex, so this is not "ten")
D + 4 = 11
...
D + 9 = 16
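The whole exchange can be reproduced in a couple of lines, masking to 8 bits just as an 8-bit register would (the helper name is mine, for illustration):

```python
result = (0xED + 0x09) & 0xFF  # keep only the low 8 bits, like an 8-bit register
print(hex(result))  # 0xf6

def to_signed8(b):
    """Interpret an 8-bit pattern as a two's complement value."""
    return b - 256 if b >= 128 else b

print(to_signed8(0xED), to_signed8(0x09), to_signed8(result))  # -19 9 -10
```

So F6 is exactly -19 + 9 = -10 in two's complement; no conversion step is needed.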

Trouble understanding an exercise given the two's complement in hex format to convert into decimal format

I am trying to convert the two's complement of the following hex values to their decimal values:
23, 57, 94 and 87.
a) 23
Procedure: (3 x 16^0) + (2 x 16^1) -> (3) + (32) = 35 (Correct)
b) 57
Procedure: (7 x 16^0) + (5 x 16^1) -> (7) + (80) = 87 (Correct)
For 94 and 87, the correct values are -108 & -121 respectively.
If I follow the procedure I used for numbers a) and b), I get 148 & 135 for 94 & 87.
Can someone explain to me how do I get to the correct results since mine are wrong? Do I need to convert the byte to binary first and then proceed from there?
Thanks a lot in advance!
0x94 = 0b10010100
now you can convert it to a decimal number like it is a normal binary number, except that the MSB counts negative:
1 * -2^7 + 0 * 2^6 + 0 * 2^5 + 1 * 2^4 + 0 * 2^3 + 1 * 2^2 + 0 * 2^1 + 0 * 2^0 =
-2^7 + 2^4 + 2^2 =
-128 + 16 + 4 =
-108
The other number works similarly.
First write down the binary representation of the hex value:
94h = 10010100b
To take the two's complement, you flip all bits and add 00000001b, so the two's complement of this binary string is
01101011b + 00000001b = 01101100b
Since the original number had its most significant bit set, it represents a negative value, and the two's complement gives its magnitude:
01101100b = 108d, so 94h represents -108.
The other works similarly.
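Both answers can be checked with a small helper (a sketch assuming one-byte values; the function name is mine):

```python
def twos_complement_byte(hex_str):
    """Decimal value of a hex byte interpreted as 8-bit two's complement."""
    value = int(hex_str, 16)
    # Patterns with the top bit set (>= 0x80) represent negative numbers.
    return value - 256 if value >= 128 else value

for h in ("23", "57", "94", "87"):
    print(h, "->", twos_complement_byte(h))
# 23 -> 35, 57 -> 87, 94 -> -108, 87 -> -121
```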

Add 25 & 30 as binary number

Using 8 bit registers and signed magnitude representation.
I thought 25 in BCD is 010 0101 but my text book says it as 001 1001. Can somebody explain?
25 / 2 = 12r1 (12 with a remainder of 1)
12 / 2 = 6r0 (6 with a remainder of 0)
6 / 2 = 3r0 (3 with a remainder of 0)
3 / 2 = 1r1 (1 with a remainder of 1)
1 / 2 = 0r1 (0 with a remainder of 1)
So 11001 (working backward up the tree) is the binary equivalent to 25.
Another way to think about it is with powers of 2:
(1*16) + (1*8) + (0*4) + (0*2) + (1*1) = 25
And it's worth noting, just as in base 10, leading zeros do not change the value of a number. (00025 == 25) (0011001 == 11001).
The leading zeros are there in your case because you need to populate an 8-bit register (there need to be 8 binary digits regardless of their value).
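The repeated-division procedure, padded to an 8-bit register width, can be sketched like this (function name is mine, for illustration):

```python
def to_binary(n, width=8):
    """Collect remainders of repeated division by 2, least significant first."""
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits  # each remainder is the next bit, read backward
        n //= 2
    return bits.rjust(width, "0")  # leading zeros fill out the register

print(to_binary(25))  # 00011001
print(to_binary(30))  # 00011110
```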

Convert decimal number to excel-header-like number

0 = A
1 = B
...
25 = Z
26 = AA
27 = AB
...
701 = ZZ
702 = AAA
I cannot think of any solution that does not involve loop-bruteforce :-(
I expect a function/program, that accepts a decimal number and returns a string as a result.
Haskell, 78 57 50 43 chars
o=map(['A'..'Z']:)$[]:o
e=(!!)$o>>=sequence
Other entries aren't counting the driver, which adds another 40 chars:
main=interact$unlines.map(e.read).lines
A new approach, using a lazy, infinite list, and the power of Monads! And besides, using sequence makes me :), using infinite lists makes me :o
If you look carefully, the Excel representation is like a base-26 number, but not exactly the same as base 26.
In Excel conversion Z + 1 = AA, while in base 26 Z + 1 = BA.
The algorithm is almost the same as decimal to base-26 conversion, with just one change.
In base-26, we do a recursive call by passing it the quotient, but here we pass it quotient-1:
function decimalToExcel(num)
// base condition of recursion.
if num < 26
print 'A' + num
else
quotient = num / 26;
reminder = num % 26;
// recursive calls.
decimalToExcel(quotient - 1);
decimalToExcel(reminder);
end-if
end-function
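A direct Python transcription of that pseudocode, returning the string instead of printing (the function name is mine):

```python
def decimal_to_excel(num):
    """0 -> A, 25 -> Z, 26 -> AA, ...; note the quotient - 1 in the recursion."""
    if num < 26:
        return chr(ord("A") + num)  # base case: a single letter
    quotient, remainder = divmod(num, 26)
    # Recurse with quotient - 1: this is the one change from plain base 26.
    return decimal_to_excel(quotient - 1) + decimal_to_excel(remainder)

for n in (0, 25, 26, 27, 701, 702):
    print(n, "=", decimal_to_excel(n))
```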
Python, 44 chars
Oh c'mon, we can do better than lengths of 100+ :
X=lambda n:~n and X(n/26-1)+chr(65+n%26)or''
Testing:
>>> for i in 0, 1, 25, 26, 27, 700, 701, 702:
... print i,'=',X(i)
...
0 = A
1 = B
25 = Z
26 = AA
27 = AB
700 = ZY
701 = ZZ
702 = AAA
Since I am not sure what base you're converting from and what base you want (your title suggests one and your question the opposite), I'll cover both.
Algorithm for converting ZZ to 701
First recognize that we have a number encoded in base 26, where the "digits" are A..Z. Set a counter a to zero and start reading the number at the rightmost (least significant) digit. Progressing from right to left, read each digit and convert it to a decimal number. Multiply this by 26^a and add it to the result. Increment a and process the next digit.
Algorithm for converting 701 to ZZ
We simply factor the number into powers of 26, much like we do when converting to binary. Simply take num%26, convert it to an A..Z "digit" and append it to the converted number (assuming it's a string), then integer-divide the number. Repeat until num is zero. After this, reverse the converted number string to have the most significant digit first.
Edit: As you point out, once two-digit numbers are reached we actually have base 27 for all non-least-significant bits. Simply apply the same algorithms here, incrementing any "constants" by one. Should work, but I haven't tried it myself.
Re-edit: For the ZZ->701 case, don't increment the base exponent. Do however keep in mind that A no longer is 0 (but 1) and so forth.
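Putting the re-edited rule into code (A counts as 1 within the string, and we subtract 1 at the end to return to the 0-based numbering used in the question; function name is mine):

```python
def excel_to_decimal(s):
    """Inverse of the 0-based scheme: 'A' -> 0, 'ZZ' -> 701, 'AAA' -> 702."""
    n = 0
    for ch in s:
        n = n * 26 + (ord(ch) - ord("A") + 1)  # A counts as 1 here, not 0
    return n - 1  # shift back to the 0-based numbering

print(excel_to_decimal("ZZ"))   # 701
print(excel_to_decimal("AAA"))  # 702
```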
Explanation of why this is not a base 26 conversion
Let's start by looking at the real base 26 positional system. (Rather, look at base 4, since it has fewer digits.) The following is true (assuming A = 0):
A = AA = A * 4^1 + A * 4^0 = 0 * 4^1 + 0 * 4^0 = 0
B = AB = A * 4^1 + B * 4^0 = 0 * 4^1 + 1 * 4^0 = 1
C = AC = A * 4^1 + C * 4^0 = 0 * 4^1 + 2 * 4^0 = 2
D = AD = A * 4^1 + D * 4^0 = 0 * 4^1 + 3 * 4^0 = 3
BA = B * 4^1 + A * 4^0 = 1 * 4^1 + 0 * 4^0 = 4
And so forth... notice that AA is 0 rather than 4 as it would be in Excel notation. Hence, Excel notation is not base 26.
In Excel VBA ... the obvious choice :)
Sub a()
For Each O In Range("A1:AA1")
k = O.Address()
Debug.Print Mid(k, 2, Len(k) - 3); "="; O.Column - 1
Next
End Sub
Or for getting the column number in the first row of the WorkSheet (which make more sense, since we are in Excel ...)
Sub a()
For Each O In Range("A1:AA1")
O.Value = O.Column - 1
Next
End Sub
Or better yet: 56 chars
Sub a()
Set O = Range("A1:AA1")
O.Formula = "=Column()"
End Sub
Scala: 63 chars
def c(n:Int):String=(if(n<26)""else c(n/26-1))+(65+n%26).toChar
Prolog, 109 123 bytes
Convert from decimal number to Excel string:
c(D,E):- d(D,X),atom_codes(E,X).
d(D,[E]):-D<26,E is D+65,!.
d(D,[O|M]):-N is D//27,d(N,M),O is 65+D rem 26.
That code does not work for c(27, N), which yields N='BB'
This one works fine:
c(D,E):-c(D,26,[],X),atom_codes(E,X).
c(D,B,T,M):-(D<B->M-S=[O|T]-B;(S=26,N is D//S,c(N,27,[O|T],M))),O is 91-S+D rem B,!.
Tests:
?- c(0, N).
N = 'A'.
?- c(27, N).
N = 'AB'.
?- c(701, N).
N = 'ZZ'.
?- c(702, N).
N = 'AAA'
Converts from Excel string to decimal number (87 bytes):
x(E,D):-x(E,0,D).
x([C],X,N):-N is X+C-65,!.
x([C|T],X,N):-Y is (X+C-64)*26,x(T,Y,N).
F# : 166 137
let rec c x = if x < 26 then [(char) ((int 'A') + x)] else List.append (c (x/26-1)) (c (x%26))
let s x = new string (c x |> List.toArray)
PHP: At least 59 and 33 characters.
<?for($a=NUM+1;$a>=1;$a=$a/26)$c=chr(--$a%26+65).$c;echo$c;
Or the shortest version:
<?for($a=A;$i++<NUM;++$a);echo$a;
Using the following formula, you can figure out the last character in the string:
transform(int num)
return (char)(num + 65); // Transform int to ASCII alphabetic char; 'A' is 65.
char lastChar(int num)
{
return transform(num % 26);
}
Using this, we can make a recursive function (I don't think it's brute force).
string getExcelHeader(int decimal)
{
if (decimal >= 26)
return getExcelHeader(decimal / 26 - 1) + transform(decimal % 26);
else
return transform(decimal);
}
Or.. something like that. I'm really tired, maybe I should stop answering questions and go to bed :P