How do I add two hexadecimal values? - mips

This is a school assignment: I've been given homework, and one of the problems is to figure out the value after adding two hexadecimal values,
0x80000000 and 0xD0000000. I understand that D = 13, but I don't understand how the answer can be 15, because 8 + 13 = 21? Could someone explain what I am doing wrong, and what I should do instead?
Thanks!

It's easy if you think of every hex digit as representing a group of four bits, and every position as a power of 16, for example
0xDEADBEEF = 13*16⁷+14*16⁶+10*16⁵+13*16⁴+11*16³+14*16²+14*16¹+15*16⁰.
The above hexadecimal value needs an algorithm to translate it to a format that the ALU can add, for instance binary numbers.
D is 13 in decimal because D is digit number 13 when A replaces 10 and so on (0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F). The position of the leading D is 7 (counting from 0 at the rightmost digit), so that digit contributes 13*16⁷.
It is easier to start from the LSB (the rightmost digit): F is 15 in hexadecimal notation, so the rightmost digit contributes 15*16⁰ = 15.
0xFF therefore means 15*16¹+15*16⁰=15*16+15=255 (you can check this with a calculator).
Now the algorithm is hopefully clear.
The decimal value of 0xDEADBEEF is therefore 13*16⁷+14*16⁶+10*16⁵+13*16⁴+11*16³+14*16²+14*16¹+15*16⁰ = 3735928559.
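As a quick check, here is a minimal Java sketch (my own, not from the original answer) that accumulates the digits positionally, exactly as the expansion above does:

public class HexCheck {
    public static void main(String[] args) {
        int[] digits = {13, 14, 10, 13, 11, 14, 14, 15}; // D E A D B E E F
        long value = 0;
        for (int d : digits) {
            value = value * 16 + d;      // shift one hex position and add
        }
        System.out.println(value);       // 3735928559
        System.out.println(0xDEADBEEFL); // same value from a hex literal
    }
}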

Sometimes I convert the hexadecimal into binary (base 2), because I feel more confident doing arithmetic in base 2 than in hexadecimal.
To do so, you expand every hexadecimal digit into a group of 4 binary digits.
hex 0x8 + 0xD
Convert to binary
binary 1000 + 1101 = 10101 = 21
group it again into 4-bit nibbles
0001 0101 = 0x15
I ignored whether it is a signed number and did not use two's complement.
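To see what a 32-bit ALU does with the original operands, here is a small Java sketch (my own); Java's int wraps modulo 2³² just like a 32-bit register (this matches MIPS addu; the MIPS add instruction would trap here, since adding two negative two's-complement values and getting a positive one is a signed overflow):

public class MipsAdd {
    public static void main(String[] args) {
        int a = 0x80000000;
        int b = 0xD0000000;
        System.out.printf("32-bit sum: 0x%08X%n", a + b);   // 0x50000000 (carry out is lost)
        long full = (a & 0xFFFFFFFFL) + (b & 0xFFFFFFFFL);  // widen to keep the 33rd bit
        System.out.printf("full sum:   0x%X%n", full);      // 0x150000000
    }
}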

Related

Convert from format with 5 exponent bits to format with 4 exponent bits

Consider the following two 9-bit floating-point representations based on the IEEE floating-point
format.
Format A:
There is 1 sign bit.
There are k = 5 exponent bits. The exponent bias is 15.
There are n = 3 fraction bits.
Format B:
There is 1 sign bit.
There are k = 4 exponent bits. The exponent bias is 7.
There are n = 4 fraction bits.
In the following table, you are given some bit patterns in format A, and your task is to convert them to the closest value in format B. In addition, give the values of the numbers represented by the format A and format B bit patterns.
I'm currently stuck on 3 cases:
Format A bits    Value       Format B bits    Value
1 00111 010      -5/1024     ?                ?
0 00000 111      7/131072    ?                ?
1 11100 000      -8192       ?                ?
I am able to compute the decimal value for all 3 cases, but I am struggling with the conversion to format B.
For the first case, if I convert to format B, the biased exponent will be -8 + bias = -8 + 7 = -1, so is it correct to make the exponent all 0s (a denormalized value)? And what will the frac part be?
For the second case, I think it is right to make the exp all 0s (a denormalized value), but what is the correct frac part?
For the last case, the exponent overflows (13 + 7 = 20, which exceeds what 4 bits can hold), so what should it be?
I really need to understand how this works, not only the answer. Thank you for any help!
The exponent field encodes an exponent. The code 0 means subnormal. The code 1 is the minimum normal exponent. With a bias of 7, code 1 encodes an exponent of 1 − 7 = −6. Therefore, the minimum exponent is −6. To encode a subnormal number, you need to adjust its exponent to be −6.
The value −5/1024 equals −101₂ • 2⁻¹⁰. Shifting to make its exponent −6 gives −101₂ • 2⁻¹⁰ = −0.0101₂ • 2⁻⁶. So the leading bit of the significand is 0 (confirming it is subnormal), and the trailing bits are 0101.
The value 7/131,072 equals 111₂ • 2⁻¹⁷. Shifting to make the exponent −6 gives 111₂ • 2⁻¹⁷ = 0.00000000111₂ • 2⁻⁶. The significand does not fit into five bits (one leading plus four trailing), so this number cannot be represented in the format. It is below half of the smallest positive subnormal, 0.0001₂ • 2⁻⁶ = 2⁻¹⁰, so rounding to the nearest representable value gives 0.
For −8192, shifting to make the exponent the largest representable value, 7, gives −1₂ • 2¹³ = −1000000₂ • 2⁷. So this number cannot be represented in the format. Rounding is implemented by choosing the rounding direction as if the exponent range were unbounded. So this should be rounded upward in magnitude (downward when the sign is considered). Rounding upward in magnitude from the largest finite value produces infinity. So this number is rounded to −∞, which is represented with the sign bit set, all ones in the exponent field, and all zeros in the primary significand field.
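If you want to check such decodings mechanically, here is a small Java sketch (the helper is my own, not part of the problem) that decodes a 9-bit format A pattern into a real value:

public class TinyFloat {
    // Decodes a 9-bit "format A" pattern: 1 sign bit, 5 exponent bits
    // (bias 15), 3 fraction bits.
    static double decodeFormatA(int bits) {
        int sign = (bits >> 8) & 1;
        int exp  = (bits >> 3) & 0x1F;
        int frac = bits & 0x7;
        double value;
        if (exp == 0) {
            value = (frac / 8.0) * Math.pow(2, -14);          // subnormal: exponent 1 - 15
        } else if (exp == 0x1F) {
            value = (frac == 0) ? Double.POSITIVE_INFINITY : Double.NaN;
        } else {
            value = (1 + frac / 8.0) * Math.pow(2, exp - 15); // implicit leading 1
        }
        return sign == 1 ? -value : value;
    }

    public static void main(String[] args) {
        System.out.println(decodeFormatA(0b1_00111_010)); // -5/1024  = -0.0048828125
        System.out.println(decodeFormatA(0b0_00000_111)); // 7/131072 ≈ 5.34E-5
        System.out.println(decodeFormatA(0b1_11100_000)); // -8192.0
    }
}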

How computers convert decimal to binary integers

This is surely a duplicate, but I was not able to find an answer to the following question.
Let's consider the decimal integer 14. We can obtain its binary representation, 1110, using e.g. the divide-by-2 method (% represents the modulo operator):
14 % 2 = 0
7 % 2 = 1
3 % 2 = 1
1 % 2 = 1
but how do computers convert decimal to binary integers?
The above method would require the computer to perform arithmetic and, as far as I understand, because arithmetic is performed on binary numbers, it seems we would be back to the same issue.
I suppose that any other algorithmic method would suffer from the same problem. How do computers convert decimal to binary integers?
Update: Following a discussion with Code-Apprentice (see the comments under his answer), here is a reformulation of the question for the two cases of interest:
a) How is the conversion to binary performed when the user types integers on a keyboard?
b) Given a mathematical operation in a programming language, say 12 / 3, how is the conversion from decimal to binary done when running the program, so that the computer can do the arithmetic?
There is only binary
The computer stores all data as binary. It does not convert from decimal to binary, since binary is its native language. When the computer displays a number, it converts from the binary representation to any base, which by default is decimal.
A key concept to understand here is the difference between the computer's internal storage and the representation as characters on your monitor. If you want to display a number as binary, you can write an algorithm in code to do the exact steps that you performed by hand, and then print out the characters 1 and 0 as calculated by the algorithm, as in the sketch below.
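For example, here is a minimal Java sketch (the method name is mine) of that display algorithm, using the same divide-by-2 steps from the question:

public class ShowBinary {
    // The divide-by-2 method, producing display characters for a number.
    static String toBinaryChars(int n) {
        if (n == 0) return "0";
        StringBuilder sb = new StringBuilder();
        while (n > 0) {
            sb.append((char) ('0' + (n % 2)));  // remainder is the next bit
            n /= 2;
        }
        return sb.reverse().toString();         // remainders arrive LSB first
    }

    public static void main(String[] args) {
        System.out.println(toBinaryChars(14));  // prints the characters 1110
    }
}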
Indeed, as you mention in one of your comments, if the compiler has a small look-up table associating decimal digits with binary values, then the conversion can be done with simple binary multiplications and additions.
The look-up table has to contain binary values for the single decimal digits and for decimal ten, hundred, thousand, etc.
Decimal 14 can then be transformed to binary by multiplying the binary value of 1 by the binary value of ten (1010) and adding the binary value of 4 (100).
Decimal 149 would be the binary value of 1 multiplied by the binary value of a hundred (1100100), plus the binary value of 4 multiplied by the binary value of ten (1010), plus the binary value of 9 (1001) added at the end.
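A minimal Java sketch of that idea (the method name is mine): instead of a table of tens, hundreds, and thousands, it folds them in by repeatedly multiplying by ten; every operation below is ordinary binary arithmetic in the ALU.

public class ParseDecimal {
    static int parseDecimal(String s) {
        int result = 0;
        for (int i = 0; i < s.length(); i++) {
            int digit = s.charAt(i) - '0';  // ASCII '0'..'9' -> 0..9
            result = result * 10 + digit;   // binary multiply and add
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(parseDecimal("149")); // 149 (binary 10010101)
    }
}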
Decimal numbers are misunderstood in a program
Let's take an example from the C language:
int x = 14;
here 14 is not decimal; it's two characters, 1 and 4, written together to look like 14.
We know that characters are just representations of some binary value:
1 is 00110001
4 is 00110100
(a full ASCII table for characters can be looked up online)
So 14 in character form is actually written as the binary 00110001 00110100.
00110001 00110100 => this binary is made to look like 14 on the computer screen (so we think of it as decimal).
We know the number 14 eventually should become 14 = 1110,
or we can pad it with zeros to be
14 = 00001110.
For this to happen, the computer/processor only needs to do a binary-to-binary conversion, i.e.
00110001 00110100 to 00001110,
and we are all set. A sketch of that conversion follows.
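A tiny Java sketch of that binary-to-binary step (variable names are mine), using the fact, visible in the bit patterns above, that the low four bits of an ASCII digit are the digit's value:

public class AsciiToBinary {
    public static void main(String[] args) {
        char[] chars = {'1', '4'};           // ASCII 00110001, 00110100
        int value = 0;
        for (char c : chars) {
            value = value * 10 + (c & 0x0F); // low nibble of '0'..'9' is 0..9
        }
        System.out.println(value);                         // 14
        System.out.println(Integer.toBinaryString(value)); // 1110
    }
}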

How to convert Binary to Decimal and Octal?

I have a binary number which I want to convert into decimal and octal:
(0100 1111 1011 0010)₂
I know how to convert it into decimal, but the question is confusing me, because in the middle of every 4 digits there is a space, as in "0101 1111".
Can you help me understand this question?
Thanks
First of all, make sure that the number you are converting into decimal and octal is actually binary and not Binary Coded Decimal (BCD). When a number is grouped into 4 binary digits, it sometimes represents BCD rather than plain binary (though, as the other answers note, grouping by 4 is also common simply for readability).
So, once you make sure it's actually binary and not BCD, the conversion to both decimal and octal is a matter of simple steps.
For binary to octal, you group the binary number into sets of 3 digits, starting from the least significant bit (LSB, right-most) towards the most significant bit (MSB, left-most). Add leading zeros if a group of 3 digits cannot be formed at the MSB.
Now convert each group of digits from binary to octal:
(000) -> 0
(001) -> 1
...
(111) -> 7
Finally put the numbers together, and there you have your binary converted to octal.
E.g.:
binary - 00101101
split into groups of 3 -> 000 101 101 -> 0 5 5 -> 55
Difference between 'Binary Coded Decimal' and 'Binary':
For the decimal number 1248
the binary would simply be 10011100000
However, the BCD would be -> 0001 0010 0100 1000
The spaces are not part of the number; they are just there to make it easier for humans to read. Conversion from binary to octal is simple: group the binary digits into sets of 3 (from right to left, adding extra 0s to the leftmost group), then convert each group individually. Your example:
0100 1111 1011 0010 -> 100 111 110 110 010 -> 47662
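A short Java sketch of that grouping (the method name is mine; it assumes the spaces have already been removed from the input):

public class BinToOct {
    static String binaryToOctal(String bin) {
        int pad = (3 - bin.length() % 3) % 3;   // leading zeros needed
        bin = "0".repeat(pad) + bin;            // String.repeat needs Java 11+
        StringBuilder oct = new StringBuilder();
        for (int i = 0; i < bin.length(); i += 3) {
            oct.append(Integer.parseInt(bin.substring(i, i + 3), 2));
        }
        return oct.toString();
    }

    public static void main(String[] args) {
        System.out.println(binaryToOctal("0100111110110010")); // 047662
    }
}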
The space is just for readability. It is especially nice if you try to convert this to hex, because 4 binary digits make up one hex digit.
Firstly, those spaces are for human readability; just delete them. Secondly, if this is not for a computer program, simply open up the Windows calculator, go to View, and select Programmer. Then choose the Bin radio button and type in your number (the Qword radio button should be selected). If it's for a program, I will need to know what language in order to help you.
To convert octal to decimal very quickly there are two methods. The second does the actual calculation with bit shifts; in programming, you should use bit shifts.
Example octal number = 147
Method one: From left to right.
Step 1: The first digit is 1. Take that times 8 plus the next digit, 4. Got 12.
Step 2: Take 12 times 8 plus the last digit, 7. Got 103, and 103 is the answer.
Ultimately you can use method one to convert any base into base 10.
Method one reads the string from left to right, keeping a running result. When you read the leftmost digit, add its value to the result. Each time you read a new digit, multiply the result by the base of the number (for octal, that would be 8), then add the value of the new digit to the result.
Method 2, bitshift:
Octal Number: 147.
Step 1: 1 = 1(bin) = Shift << 3 = 1000(result value)
Step 2: 4 = 100(bin) + 1000(result value) = 1100(result value)
Step 3: 1100(result value) Shift << 3 = 1100000
Step 4: 7 = 111(bin) + 1100000(result value) = 1100111
Step 5: 1100111 binary is 103 decimal.
In a programming loop, you can do something like the Java method below, and it is lightning fast. The code is so simple that it can be translated into any programming language. Note that invalid input is not reported as an error; the method simply returns 0.
static int octalToDecimal(String str) {
    int out = 0;
    for (int i = 0; i < str.length(); i++) {
        int c = str.charAt(i) ^ 48;   // XOR with 48 ('0') maps '0'..'7' to 0..7
        if (c > 7) return 0;          // not a valid octal digit
        out = (out << 3) + c;         // out = out * 8 + digit
    }
    return out;
}
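For example, octalToDecimal("147") returns 103, matching the worked steps above.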

Change an 8-digit hexadecimal number (IEEE-754 single precision representation) into a real number

I am changing an 8-digit hexadecimal number to a real number Y.
The number is 0xAC0396ED
My question is: Shall I take into consideration the 0x? What is the significance of the 0x?
I researched it and based on wikipedia, I got this:
"use the prefix 0x for numeric constants represented in hex"
What I plan on doing is take the part AC0396ED, change it to binary, and then manipulate the 32-bit number by dividing it into 3 parts: sign, exponent, and fraction.
My last question is: why do we need hexadecimal, decimal, and octal? Why don't we just stick to binary in all our arithmetic and operations?
Thank you.
My question is: Shall I take into consideration the 0x? What is the significance of the 0x?
No, you should not take it into consideration; it's only a notation for the hexadecimal base system.
What I plan on doing is take the part AC0396ED, change it to binary, and then manipulate the 32-bit number by dividing it into 3 parts: sign, exponent, and fraction.
Here is how you calculate it:
AC0396ED =>
10 12 0 3 9 6 14 13 in decimal. Then to binary
1010 1100 0000 0011 1001 0110 1110 1101
here you have all the bits, i.e. 32 bits, split 1 | 8 | 23:
1 | 01011000 | 00000111001011011101101
The first bit is 1 => the number is negative.
The second field is the exponent.
The third field is the mantissa (fraction).
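To check the by-hand decoding, here is a small Java sketch (my own); Float.intBitsToFloat reinterprets the 32-bit pattern using exactly the sign/exponent/fraction split shown above:

public class DecodeFloat {
    public static void main(String[] args) {
        int bits = 0xAC0396ED;
        System.out.println("sign:     " + ((bits >>> 31) & 1));                          // 1
        System.out.println("exponent: " + Integer.toBinaryString((bits >>> 23) & 0xFF)); // 1011000 (leading 0 dropped)
        System.out.println("fraction: " + Integer.toBinaryString(bits & 0x7FFFFF));
        System.out.println("value:    " + Float.intBitsToFloat(bits));                   // ≈ -1.87E-12
    }
}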
My last question is why do we need hexadecimal, decimal, and octal? Why don't we just stick to binary in all our arithmetic and operations?
It's much easier to read hexadecimal - compare the 32 bits above to AC0396ED.

Decimal/Hexadecimal/Binary Conversion

Right now I'm preparing for my AP Computer Science exam, and I need some help understanding how to convert between decimal, hexadecimal, and binary values by hand. The book that I'm using (Barron's) includes an example but does not explain it very well.
What are the formulas that one should use for conversion between these number types?
Are you happy that you understand number bases? If not, then you will need to read up on this or you'll just be blindly following some rules.
Plenty of books would spend a whole chapter or more on this...
Binary is base 2, Decimal is base 10, Hexadecimal is base 16.
So Binary uses digits 0 and 1, Decimal uses 0-9, Hexadecimal uses 0-9 and then we run out so we use A-F as well.
So the position of a decimal digit indicates units, tens, hundreds, thousands... these are the "powers of 10"
The position of a binary digit indicates units, 2s, 4s, 8s, 16s, 32s...the powers of 2
The position of hex digits indicates units, 16s, 256s...the powers of 16
For binary to decimal, add up each 1 multiplied by its 'power', so working from right to left:
1001 binary = 1*1 + 0*2 + 0*4 + 1*8 = 9 decimal
For binary to hex, you can either work out the total number in decimal and then convert to hex, or you can convert each 4-bit sequence into a single hex digit:
1101 binary = 13 decimal = D hex
1111 0001 binary = F1 hex
For hex to binary, reverse the previous example - it's not too bad to do in your head because you just need to work out which of 8,4,2,1 you need to add up to get the desired value.
For decimal to binary, it's more of a long division problem - find the biggest power of 2 not exceeding your input, set the corresponding binary bit to 1, and subtract that power of 2 from the original decimal number. Repeat until you have zero left.
E.g. for 87:
the powers of two are 1, 2, 4, 8, 16, 32, 64...; the highest one not exceeding 87 is 64
64 is 2^6 so we set the relevant bit to 1 in our result: 1000000
87 - 64 = 23
the next highest power of 2 smaller than 23 is 16, so set the bit: 1010000
repeat for 4,2,1
final result 1010111 binary
i.e. 64+16+4+2+1 = 87 in decimal
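A Java sketch of that subtract-the-largest-power method (the method name is mine; it assumes n > 0):

public class DecToBin {
    static String toBinary(int n) {
        StringBuilder bits = new StringBuilder();
        int p = Integer.highestOneBit(n);       // largest power of 2 <= n
        while (p > 0) {
            if (n >= p) { bits.append('1'); n -= p; }
            else        { bits.append('0'); }
            p >>= 1;                            // next smaller power of 2
        }
        return bits.toString();
    }

    public static void main(String[] args) {
        System.out.println(toBinary(87));       // 1010111
    }
}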
For hex to decimal, it's like binary to decimal, only you multiply by 1,16,256... instead of 1,2,4,8...
For decimal to hex, it's like decimal to binary, only you are looking for powers of 16, not 2. This is the hardest one to do manually.
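To check any of these conversions by machine, Java's built-in radix helpers cover all of the bases discussed here (a quick sketch):

public class RadixCheck {
    public static void main(String[] args) {
        System.out.println(Integer.parseInt("1010111", 2)); // 87
        System.out.println(Integer.toBinaryString(87));     // 1010111
        System.out.println(Integer.toHexString(87));        // 57
        System.out.println(Integer.parseInt("F1", 16));     // 241
    }
}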
This is a very fundamental question whose detailed answer, even at an entry level, could easily fill a couple of pages. Try to google it :-)