Decimal/Hexadecimal/Binary Conversion

Right now I'm preparing for my AP Computer Science exam, and I need some help understanding how to convert between decimal, hexadecimal, and binary values by hand. The book that I'm using (Barron's) includes an example but does not explain it very well.
What are the formulas that one should use for conversion between these number types?

Are you happy that you understand number bases? If not, then you will need to read up on this or you'll just be blindly following some rules.
Plenty of books would spend a whole chapter or more on this...
Binary is base 2, Decimal is base 10, Hexadecimal is base 16.
So Binary uses digits 0 and 1, Decimal uses 0-9, Hexadecimal uses 0-9 and then we run out so we use A-F as well.
So the position of a decimal digit indicates units, tens, hundreds, thousands... these are the "powers of 10"
The position of a binary digit indicates units, 2s, 4s, 8s, 16s, 32s...the powers of 2
The position of hex digits indicates units, 16s, 256s...the powers of 16
For binary to decimal, add up each 1 multiplied by its 'power', so working from right to left:
1001 binary = 1*1 + 0*2 + 0*4 + 1*8 = 9 decimal
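That right-to-left sum can be sketched in Python (the function name is my own):

```python
def binary_to_decimal(bits: str) -> int:
    """Sum each 1 bit times its power of two, working right to left."""
    total = 0
    for power, digit in enumerate(reversed(bits)):
        total += int(digit) * 2 ** power
    return total

print(binary_to_decimal("1001"))  # 9, matching the worked example
```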
For binary to hex, you can either work out the total number in decimal and then convert that to hex, or you can convert each 4-bit group into a single hex digit:
1101 binary = 13 decimal = D hex
1111 0001 binary = F1 hex
For hex to binary, reverse the previous example - it's not too bad to do in your head because you just need to work out which of 8,4,2,1 you need to add up to get the desired value.
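The 4-bits-per-hex-digit trick from the last two examples can be sketched in Python (the helper names are my own):

```python
HEX_DIGITS = "0123456789ABCDEF"

def binary_to_hex(bits: str) -> str:
    """Pad to a multiple of 4 bits, then map each 4-bit group to one hex digit."""
    bits = bits.replace(" ", "")
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    return "".join(HEX_DIGITS[int(bits[i:i + 4], 2)] for i in range(0, len(bits), 4))

def hex_to_binary(hexstr: str) -> str:
    """Reverse direction: each hex digit expands to its 4-bit pattern."""
    return " ".join(format(HEX_DIGITS.index(d), "04b") for d in hexstr.upper())

print(binary_to_hex("1111 0001"))  # F1, as in the example
print(hex_to_binary("F1"))         # 1111 0001
```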
For decimal to binary, it's more of a repeated-subtraction problem: find the biggest power of 2 no larger than your input, set the corresponding binary bit to 1, and subtract that power of 2 from the original decimal number. Repeat until you have zero left.
E.g. for 87:
the powers of two up to 87 are 1, 2, 4, 8, 16, 32, 64, so the highest is 64
64 is 2^6, so we set the corresponding bit to 1 in our result: 1000000
87 - 64 = 23
the next highest power of 2 smaller than 23 is 16, so set the bit: 1010000
repeat for 4,2,1
final result 1010111 binary
i.e. 64+16+4+2+1 = 87 in decimal
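The repeated-subtraction steps above can be sketched in Python (the function name is my own):

```python
def decimal_to_binary(n: int) -> str:
    """Greedy method from the text: take out the biggest power of 2 each time."""
    if n == 0:
        return "0"
    power = 1
    while power * 2 <= n:   # find the highest power of 2 not exceeding n
        power *= 2
    bits = ""
    while power >= 1:
        if n >= power:
            bits += "1"
            n -= power      # subtract the power we just accounted for
        else:
            bits += "0"
        power //= 2
    return bits

print(decimal_to_binary(87))  # 1010111, as in the worked example
```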
For hex to decimal, it's like binary to decimal, only you multiply by 1,16,256... instead of 1,2,4,8...
For decimal to hex, it's like decimal to binary, only you are looking for powers of 16, not 2. This is the hardest one to do manually.
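Both hex directions can be sketched the same way: a powers-of-16 sum going to decimal, and repeated division by 16 going back (the function names are my own):

```python
HEX_DIGITS = "0123456789ABCDEF"

def hex_to_decimal(hexstr: str) -> int:
    """Multiply each digit by 1, 16, 256, ... working right to left."""
    total = 0
    for power, digit in enumerate(reversed(hexstr.upper())):
        total += HEX_DIGITS.index(digit) * 16 ** power
    return total

def decimal_to_hex(n: int) -> str:
    """Repeatedly divide by 16; the remainders are the hex digits, right to left."""
    if n == 0:
        return "0"
    digits = ""
    while n > 0:
        digits = HEX_DIGITS[n % 16] + digits
        n //= 16
    return digits

print(hex_to_decimal("F1"))  # 241
print(decimal_to_hex(241))   # F1
```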

This is a very fundamental question, whose detailed answer, even at an entry level, could easily fill a couple of pages. Try to google it :-)

Related

Convert from format with 5 exponent bits to format with 4 exponent bits

Consider the following two 9-bit floating-point representations based on the IEEE floating-point format.
Format A:
There is 1 sign bit.
There are k = 5 exponent bits. The exponent bias is 15.
There are n = 3 fraction bits.
Format B:
There is 1 sign bit.
There are k = 4 exponent bits. The exponent bias is 7.
There are n = 4 fraction bits.
In the following table, you are given some bit patterns in format A, and your task is to convert them to the closest value in format B. In addition, give the values of the numbers represented by the format A and format B bit patterns.
I'm currently stuck on 3 cases:
Format A       Value      Format B   Value
1 00111 010    -5/1024    ?          ?
0 00000 111    7/131072   ?          ?
1 11100 000    -8192      ?          ?
I am able to convert all 3 cases to their decimal values, but I am struggling to convert them to format B.
For the first case, if I convert to format B, the biased exponent will be -8 + bias = -8 + 7 = -1, so is it correct to make the exponent all 0s (a denormalized value)? And what should the frac part be?
For the second case, I think it is right to make the exp all 0s (denormalized), but what is the correct frac part?
For the last case, the exponent overflows (13 + 7 = 20, which exceeds the 4-bit field), so what should it be?
I really need to understand how this works, not only the answer. Thank you for any help!
The exponent field encodes an exponent. The code 0 means subnormal. The code 1 is the minimum normal exponent. With a bias of 7, the code of 1 encodes an exponent of 1−7 = −6. Therefore, the minimum exponent is −6. To encode a subnormal number, you need to adjust its exponent to be −6.
The value −5/1024 equals −101₂•2⁻¹⁰. Shifting to make its exponent −6 gives −101₂•2⁻¹⁰ = −0.0101₂•2⁻⁶. So the leading bit of the significand is 0 (confirming it is subnormal), and the trailing bits are 0101.
For 7/131,072, which is 111₂•2⁻¹⁷, shifting to make the exponent −6 gives 0.00000000111₂•2⁻⁶. The significand does not fit into five bits (one leading plus four trailing), so this number cannot be represented exactly in the format. It is less than half of the smallest positive subnormal, 0.0001₂•2⁻⁶ = 2⁻¹⁰, so rounding to the nearest representable value gives 0.
For −8192, shifting to make the exponent the largest representable value, 7, gives −1₂•2¹³ = −1000000₂•2⁷. So this number cannot be represented in the format. Rounding is implemented by choosing the rounding direction as if the exponent range were unbounded. So this should be rounded upward in magnitude (downward when the sign is considered). Rounding upward in magnitude from beyond the largest finite value produces infinity. So this number is rounded to −∞, which is represented with the sign bit set, all ones in the exponent field, and all zeros in the primary significand field.
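If you want to check the given values mechanically, here is a sketch that decodes a sign/exponent/fraction pattern by the rules above (the `decode` helper, and returning an exact `Fraction`, are my own choices; infinities and NaNs are ignored):

```python
from fractions import Fraction

def decode(bits: str, k: int, n: int) -> Fraction:
    """Decode a sign/exponent/fraction pattern with k exponent bits
    (bias 2**(k-1) - 1) and n fraction bits. Ignores infinities/NaNs."""
    bits = bits.replace(" ", "")
    sign = -1 if bits[0] == "1" else 1
    exp_field = int(bits[1:1 + k], 2)
    frac_field = int(bits[1 + k:], 2)
    bias = 2 ** (k - 1) - 1
    if exp_field == 0:  # subnormal: exponent is 1 - bias, no implied leading 1
        return sign * Fraction(frac_field, 2 ** n) * Fraction(2) ** (1 - bias)
    # normal: implied leading 1
    return sign * (1 + Fraction(frac_field, 2 ** n)) * Fraction(2) ** (exp_field - bias)

# The three format-A patterns from the question (k = 5, n = 3):
print(decode("1 00111 010", 5, 3))  # -5/1024
print(decode("0 00000 111", 5, 3))  # 7/131072
print(decode("1 11100 000", 5, 3))  # -8192
```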

How computers convert decimal to binary integers

This is surely a duplicate, but I was not able to find an answer to the following question.
Let's consider the decimal integer 14. We can obtain its binary representation, 1110, using e.g. the divide-by-2 method (% is the modulus operator); reading the remainders from bottom to top gives 1110:
14 % 2 = 0
7 % 2 = 1
3 % 2 = 1
1 % 2 = 1
but how do computers convert decimal to binary integers?
The above method would require the computer to perform arithmetic and, as far as I understand, because arithmetic is performed on binary numbers, it seems we would be back dealing with the same issue.
I suppose that any other algorithmic method would suffer the same problem. How do computers convert decimal to binary integers?
Update: Following a discussion with Code-Apprentice (see comments under his answer), here is a reformulation of the question in two cases of interest:
a) How is the conversion to binary performed when the user types integers on a keyboard?
b) Given a mathematical operation in a programming language, say 12 / 3, how is the conversion from decimal to binary done when running the program, so that the computer can do the arithmetic?
There is only binary
The computer stores all data as binary. It does not convert from decimal to binary since binary is its native language. When the computer displays a number it will convert from the binary representation to any base, which by default is decimal.
A key concept to understand here is the difference between the computer's internal storage and the representation as characters on your monitor. If you want to display a number as binary, you can write an algorithm in code to do the exact steps that you performed by hand. You then print out the characters 1 and 0 as calculated by the algorithm.
Indeed, like you mention in one of your comments, if the compiler has a small look-up table associating decimal digits with binary integers, then the conversion can be done with simple binary multiplications and additions.
The look-up table has to contain binary values for the single decimal digits and for decimal ten, hundred, thousand, etc.
Decimal 14 can be transformed to binary by multiplying the binary value of 1 by the binary value of ten and adding the binary value of 4.
Decimal 149 would be the binary value of 1 multiplied by the binary value of a hundred, added to the binary value of 4 multiplied by the binary value of ten, with the binary value of 9 added at the end.
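That digit-table idea can be sketched in Python; the multiply-by-ten accumulation below is the same sum as above, folded into Horner form (the function and table are my own illustration):

```python
def parse_decimal(text: str) -> int:
    """Turn a string of decimal digit characters into a binary integer
    using only binary arithmetic: result = result * ten + digit."""
    digit_value = {"0": 0, "1": 1, "2": 2, "3": 3, "4": 4,
                   "5": 5, "6": 6, "7": 7, "8": 8, "9": 9}
    result = 0
    for ch in text:
        result = result * 10 + digit_value[ch]
    return result

print(parse_decimal("149"))      # 149
print(bin(parse_decimal("14")))  # 0b1110
```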
Decimal numbers are misunderstood in a program
let's take an example from the C language
int x = 14;
here 14 is not decimal; it's two characters, 1 and 4, written together as 14
we know that characters are just representations of some binary value
1 is 00110001
4 is 00110100
the full ASCII table for characters can be seen here
so 14 in character form is actually stored as the binary 00110001 00110100
00110001 00110100 => this binary is made to look like 14 on the computer screen (so we think of it as decimal)
we know the number 14 eventually should become 14 = 1110
or we can pad it with zeros to be
14 = 00001110
for this to happen, the computer/processor only needs to do a binary-to-binary conversion, i.e.
00110001 00110100 to 00001110
and we are all set

How do I add two hexadecimal values?

this is a school assignment. I've been given homework, and one of the problems is to figure out the value after adding two hexadecimal values.
0x80000000 and 0xD0000000. I understand that D = 13, but I don't understand how the answer is 15, since 8 + 13 = 21? Could someone explain what I am doing wrong, and what I should do instead?
Thanks!
It's easy if you remember that each hex digit represents a group of four bits, and that each position corresponds to a power of 16; for example
0xDEADBEEF = 13*16⁷+14*16⁶+10*16⁵+13*16⁴+11*16³+14*16²+14*16¹+15*16⁰.
The above hexadecimal value needs an algorithm to translate it into a format that the ALU can add, for instance binary numbers.
D is 13 in decimal because D is digit number 13 if A replaces 10 and so on (0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F). The position of the leading D is 7, so it contributes 13*16⁷.
It is easier to work from the LSB: F is 15 in hexadecimal notation, and therefore the rightmost digit contributes 15*16⁰ = 15.
0xFF therefore means 15*16¹+15*16⁰=15*16+15=255 (you can check this with a calculator).
Now the algorithm is hopefully clear.
3735928559 is the decimal value of DEADBEEF, because 13*16⁷+14*16⁶+10*16⁵+13*16⁴+11*16³+14*16²+14*16¹+15*16⁰ = 3735928559.
Sometimes I convert the hexadecimal into binary (base 2), because I feel more confident doing arithmetic in base 2 than in hexadecimal.
In order to do so, you need to expand every hexadecimal digit into a group of 4 binary bits.
hex 0x8 + 0xD
Convert to binary
binary 1000 + 1101 = 10101 = 21
group it again as 4 bits
0001 0101 = 0x15
I ignored whether it's a signed number and didn't use two's complement.
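The digit-by-digit addition with carries at 16 can be sketched in Python (`add_hex` is my own helper; like the answer above, it treats the inputs as unsigned):

```python
def add_hex(a: str, b: str) -> str:
    """Add two unsigned hex strings digit by digit from the right,
    carrying whenever a digit sum reaches 16 (like decimal carrying at 10)."""
    digits = "0123456789ABCDEF"
    width = max(len(a), len(b))
    a = a.upper().rjust(width, "0")
    b = b.upper().rjust(width, "0")
    result, carry = "", 0
    for da, db in zip(reversed(a), reversed(b)):
        s = digits.index(da) + digits.index(db) + carry
        result = digits[s % 16] + result
        carry = s // 16
    return ("1" + result) if carry else result

print(add_hex("8", "D"))                # 15  (8 + 13 = 21 decimal = 0x15)
print(add_hex("80000000", "D0000000"))  # 150000000
```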

Converting a two's complement in fix-point arithmetic (with bit numbering) to decimal number

I'm wondering how you convert a two's complement number in fixed-point arithmetic to a decimal number.
So let's say we have this fixed-point two's complement value: 11001011, with 2 positions after the binary point, and we want to turn it into a decimal number.
We already know that the decimal will be negative because the first bit is a 1.
2 positions behind decimal point, so we have 110010 11.
Convert that from two's complement to normal form (subtract 1, invert):
110010 10 (I subtracted 1 here)
001101 01 (I inverted here)
001101 in decimal is 13
01 in decimal is 1
So in the end we get -13.1. Is that correct, or is there not even a way to convert this?
The simplest method is just to convert the whole value to an integer (ignoring the fixed point, initially), then scale the result.
So for your example, where you have a 6.2 fixed-point number, 11001011:
Convert as integer:
11001011 = -53
Divide by the scale factor 2^2:
-53 / 4 = -13.25
Note that the fractional part is always unsigned. (You can probably see now that the trailing 11 gives you +0.75 for the fractional part, i.e. 00 = 0.0, 01 = +0.25, 10 = +0.5, 11 = +0.75; -13.25 = -14 + 0.75.)
A small note (but very important in understanding your question): when you say "decimal point", do you really mean decimal, or do you mean the "binary" point?
Meaning, if it's a decimal point, you can position it after you convert to decimal, to see how many decimal digits should remain to the right of the point; but if you mean the binary point, it determines how many bits of the binary representation make up the fraction part.
In your example, it seems that you meant binary point, and then the integer part is 001101(bin) = 13(dec) and the fraction part is 0.01(bin)=0.25(dec), because the first bit to the right of the point represents 1/2, the second represents 1/4 and so on, and the whole thing is then negated.
The total result will then be -13.25.
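The convert-then-scale method can be sketched in Python (the function name is mine):

```python
def fixed_to_decimal(bits: str, frac_bits: int) -> float:
    """Interpret a two's-complement bit string with frac_bits binary places:
    convert the whole string as a signed integer, then divide by 2**frac_bits."""
    raw = int(bits, 2)
    if bits[0] == "1":          # top bit set: negative in two's complement
        raw -= 1 << len(bits)
    return raw / (1 << frac_bits)

print(fixed_to_decimal("11001011", 2))  # -13.25
```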

How to interpret an 8-bit hex value as unsigned decimal numbers?

This is a basic question and probably very easy but I am really confused.
I have an 8-bit hex number 0x9F and I need to interpret this number as an unsigned decimal number.
Do I just convert that to the binary form 1001 1111? and then the decimal number is 159?
I'm sorry if this question is trivial, but my professor said I couldn't e-mail him questions and I don't know anyone in my class. He made it sound like, when converting from hex to binary, the result will be in two's complement. So I don't know if I need to convert it back to normal or not before converting to decimal.
We had a signed decimal number, converted it to binary, then took the two's complement and converted to hex. Is that only for signed numbers?
It's simply 9 * 16 + F where F is 15 (the letters A thru F stand for 10 thru 15). In other words, 0x9F is 159.
It's no different really to the number 314,159 being:
3 * 100,000 (10^5, "to the power of", not "xor")
+ 1 * 10,000 (10^4)
+ 4 * 1,000 (10^3)
+ 1 * 100 (10^2)
+ 5 * 10 (10^1)
+ 9 * 1 (10^0)
for decimal (base 10).
The signedness of such a number is sort of "one level up" from there. The 8-bit pattern for 159 does represent a negative number, but only if you interpret it as signed (two's complement).
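The two interpretations of the same 8-bit pattern can be sketched in Python (the helper names are mine):

```python
def as_unsigned(pattern: int) -> int:
    """Read an 8-bit pattern as an unsigned number: just its plain value."""
    return pattern & 0xFF

def as_signed(pattern: int) -> int:
    """Read the same pattern as two's complement: if the top bit is set,
    the value is the unsigned value minus 256."""
    pattern &= 0xFF
    return pattern - 256 if pattern & 0x80 else pattern

print(as_unsigned(0x9F))  # 159
print(as_signed(0x9F))    # -97
```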