Hex to Binary Angle Measurement (BAMS) in any language - binary

I have a 32-bit hex value, for example 04FA4FA4, and I want to know how to convert it to BAMS in the form of a double. An example in any language would work fine; I am only concerned with learning the algorithm or formula to make the conversion. I know how to convert to BAMS when I have a form like 000:00.0000, but I can't figure out how to make the conversion from hex.

This link is the easiest-to-understand resource I found. The algorithm is simple:
(decimal value of the hex) * 180 / 2^(n-1)   // where n is the number of bits
The example in the reference is,
0000 0000 1010 0110   (binary)
   0    0    A    6   (hex: 0x00A6 = 166 decimal)
166 * 180 * 2^-15 = 0.9118 degrees
The code for this algorithm is so simple, I don't think I need to enumerate it here. Let me know if someone feels this is incorrect.
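Still, for completeness, here is a minimal sketch in Python (the function name is mine); it assumes an unsigned BAMS word, as in the reference's 16-bit example:

def bams_to_degrees(value, n_bits):
    # value * 180 / 2^(n-1)
    return value * 180.0 / (1 << (n_bits - 1))

print(bams_to_degrees(0x00A6, 16))      # 0.911865234375, the ~0.9118 degrees above
print(bams_to_degrees(0x04FA4FA4, 32))  # ~6.99999992, the question's 32-bit value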

How does a computer convert a bit sequence into a digit sequence?

I am making an n-bit integer datatype, so I'm not restricted to the standard 32- and 64-bit variants.
My binary calculations seem to be correct; I would now like to show them as base-10 numbers.
Here is what I don't know: how does a computer know what specific characters to put on the console?
How does it know that 1111 1111 should show up as the sequence 255 on the screen? What algorithm is used for this? I know how to convert 1111 1111 into 255 mathematically. But how do you do the same visually?
I don't really know how to even begin doing this.
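The usual answer is an itoa-style loop: repeatedly divide by 10, and map each remainder 0-9 onto the character codes '0'-'9', which are contiguous in ASCII. A minimal Python sketch, assuming a non-negative value:

def to_decimal_string(value):
    # itoa-style: peel off decimal digits with divmod, least significant first
    if value == 0:
        return "0"
    chars = []
    while value > 0:
        value, digit = divmod(value, 10)
        chars.append(chr(ord('0') + digit))  # map 0..9 to '0'..'9'
    return ''.join(reversed(chars))

print(to_decimal_string(0b11111111))  # prints 255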

Number Systems - Hex vs Binary

Question with regards to the use of hexadecimal. Here's a statement I have a question about:
Hexadecimal is often the preferred means of representing data because it uses fewer digits than binary.
For example, the word vim can be represented by:
Hexadecimal: 76 69 6D
Decimal: 118 105 109
Binary: 01110110 01101001 01101101
Obviously, hex is shorter than binary in this example. However, won't the hex values eventually be converted to binary at the machine level, so that the end result for hex vs. binary is exactly the same?
This is a good question.
Yes, the hexadecimal values will be converted to binary at the machine level.
But you are looking at the question from the machine's point of view.
Hexadecimal notation was introduced because:
It's easier for humans to read and memorize than binary. For example, if you look at memory addresses, you'll notice they are written in hexadecimal, which is far simpler to read than binary.
It's easier to convert between binary and hexadecimal than between binary and other bases (like our everyday base 10): each hex digit corresponds to exactly 4 bits, so you can group binary digits into hex in your head.
I suggest this article, which gives some easy example calculations to better understand the advantages of hexadecimal.
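As a quick illustration of the 4-bits-per-hex-digit point, here is a small Python snippet using the "vim" example from the question:

for ch in "vim":
    bits = format(ord(ch), '08b')
    print(ch, format(ord(ch), '02X'), bits[:4], bits[4:])
# v 76 0111 0110
# i 69 0110 1001
# m 6D 0110 1101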

Converting a two's complement number in fixed-point arithmetic (with bit numbering) to a decimal number

I'm wondering how you could convert a two's complement number in fixed-point arithmetic to a decimal number.
So let's say we have this fixed-point number in two's complement: 11001011, with 2 positions behind the decimal point, and we want to turn it into a decimal number.
We already know that the decimal will be negative because the first bit is a 1.
2 positions behind the point, so we have 110010 11.
Convert that from two's complement to normal form (subtract 1, then invert):
110010 10 (subtracted 1 here)
001101 01 (inverted here)
001101 in decimal is 13
01 in decimal is 1
So in the end we get -13.1. Is that correct, or is there not even a way to convert this?
The simplest method is just to convert the whole value to an integer (ignoring the fixed point initially), then scale the result.
So for your example, where you have a 6.2 fixed-point number, 110010 11:
Convert as integer:
11001011 = -53
Divide by the scale factor 2^2:
-53 / 4 = -13.25
Note that the fractional part is always unsigned; the fractional bits always add a non-negative amount: 00 = 0.0, 01 = +0.25, 10 = +0.5, 11 = +0.75. (Here the integer bits 110010 are -14, and -14 + 0.75 = -13.25.)
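A minimal Python sketch of this convert-then-scale method (the function name is mine):

def fixed_to_float(bits, total_bits, frac_bits):
    value = int(bits, 2)
    if value >= 1 << (total_bits - 1):  # sign bit set: wrap into the negative range
        value -= 1 << total_bits
    return value / (1 << frac_bits)     # divide by the scale factor 2^frac_bits

print(fixed_to_float("11001011", 8, 2))  # -13.25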
A small note (but very important in understanding your question): when you say "decimal point", do you really mean decimal, or do you mean "binary" point?
Meaning: if it's a decimal point, you position it after you convert to decimal, deciding how many decimal digits should remain to the right of the point; but if you mean binary point, it says how many bits of the binary representation make up the fraction part.
In your example, it seems that you meant binary point. Then the integer part is 001101 (bin) = 13 (dec) and the fraction part is 0.01 (bin) = 0.25 (dec), because the first bit to the right of the point represents 1/2, the second represents 1/4, and so on; the whole thing is then negated.
The total result then will be -13.25.
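The same example worked the way this answer describes (negate to get the magnitude, split at the binary point, then restore the sign), as a Python sketch:

bits = 0b11001011
magnitude = (~bits + 1) & 0xFF           # two's-complement negate in 8 bits: 00110101
integer_part = magnitude >> 2            # 001101 -> 13
fraction_part = (magnitude & 0b11) / 4   # 01 -> 0.25
print(-(integer_part + fraction_part))   # -13.25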

Am I on the right track with my Hex to binary conversion?

I have to convert 1357AC.EF from hex to binary. I'm a little confused about what to do. Since it has a decimal point, do I convert it from hex to decimal, doing (1x16^5)+(3x16^4)+(5x16^3)+(7x16^2)+(10x16^1)+12+(14x16^-1)+(15x16^-2), and then convert that to binary by repeatedly dividing by 2 and keeping the remainders? Or am I making this too hard for myself?
Just convert each individual hex digit to its four-bit binary equivalent.
1357AC.EF would be 0001 0011 0101 0111 1010 1100 . 1110 1111
You are making it too hard: one hex character is 4 bits, e.g. 'A' = '1010'.
So just iterate through the string: get a character, look up its binary in a hash or an array, and then concatenate it with the earlier result.
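A small Python sketch of that lookup-table approach (the table and function names are mine):

NIBBLES = {format(i, 'X'): format(i, '04b') for i in range(16)}

def hex_to_binary(s):
    # look up each hex digit's 4-bit pattern; pass '.' through unchanged
    return ' '.join(NIBBLES.get(ch.upper(), ch) for ch in s)

print(hex_to_binary("1357AC.EF"))  # 0001 0011 0101 0111 1010 1100 . 1110 1111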

How would I write this in the IEEE 754 standard?

I would like to know how to write 5/32 in the IEEE 754 standard. Is there a shortcut for the fraction part?
The answer is 0 01111100 01000000000000000000000. But there has to be an easier way to get 5/32 into this format than converting it to binary first.
I found out that you can get the binary a lot faster this way: 5 (base 10) = 101 (binary) and 1/32 = 2^-5, so 5/32 is just 101 x 2^-5, which normalizes to 1.01 x 2^-3 (hence the biased exponent 127 - 3 = 124 = 01111100).
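If you want to check such a conversion, one way in Python is to pack the value as a single-precision float with struct and print the three bit fields:

import struct

raw = struct.unpack('>I', struct.pack('>f', 5/32))[0]  # reinterpret the 4 float bytes as a uint32
bits = format(raw, '032b')
print(bits[0], bits[1:9], bits[9:])  # 0 01111100 01000000000000000000000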