I have to convert 1357AC.EF from hex to binary. I'm a little confused about what to do. Since it has a decimal point, do I convert it from hex to decimal using (1x16^5)+(3x16^4)+(5x16^3)+(7x16^2)+(10x16^1)+(12x16^0)+(14x16^-1)+(15x16^-2) and then convert that to binary by repeatedly dividing by 2 and taking the remainders? Or am I making this too hard for myself?
Just convert each individual hex digit to its four-bit binary equivalent.
1357AC.EF would be 0001 0011 0101 0111 1010 1100 . 1110 1111
You are making it too hard. One hex character is 4 bits, e.g. 'A' = '1010'.
So just iterate through the string: take a character, look up its binary in a hash or an array, and concatenate it with the earlier result.
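For instance, a rough C# sketch of that lookup-and-concatenate idea (the table and variable names are just illustrative):

// needs: using System; using System.Collections.Generic; using System.Text;
// Map each hex digit to its 4-bit binary string, then concatenate.
var hexToBits = new Dictionary<char, string> {
    {'0',"0000"}, {'1',"0001"}, {'2',"0010"}, {'3',"0011"},
    {'4',"0100"}, {'5',"0101"}, {'6',"0110"}, {'7',"0111"},
    {'8',"1000"}, {'9',"1001"}, {'A',"1010"}, {'B',"1011"},
    {'C',"1100"}, {'D',"1101"}, {'E',"1110"}, {'F',"1111"}
};

string hex = "1357AC.EF";
var result = new StringBuilder();
foreach (char c in hex) {
    // Pass the radix point through unchanged; translate every hex digit.
    result.Append(c == '.' ? "." : hexToBits[char.ToUpper(c)]);
}
Console.WriteLine(result);   // 000100110101011110101100.11101111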
In binary, how do you differentiate between numbers and letters? I believe that positive numbers begin with 0000 and negative with 1000, then you add the next 4 digits, so -5 would be 1000 0101.
I know capital/lowercase letters start with 0100 and 0110, so I'm just wondering if I was right about the number thing.
Also, if you could tell me how to do decimals or special symbols, that would be great.
Thanks - Jon
Binary is just a representation of a value. It's true that for signed values, if the MSB is 1 the value is negative and if the MSB is 0 the value is positive. However, the second part of your statement is not correct: 1000 0101 is not -5, it's actually -123. To represent -5 you take the value 5, which is 0000 0101, invert all the bits, and add one, giving you 1111 1011. This is called two's complement.
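If it helps to see that in code, here is a tiny C# illustration (just a sketch):

// sbyte is C#'s signed 8-bit type, so the cast exposes the actual two's complement bit pattern.
sbyte minusFive = -5;
byte pattern = (byte)minusFive;                                    // the same 8 bits, read as unsigned
Console.WriteLine(Convert.ToString(pattern, 2).PadLeft(8, '0'));   // prints 11111011

// And reading 1000 0101 back as a signed byte gives -123, not -5:
Console.WriteLine((sbyte)Convert.ToByte("10000101", 2));           // prints -123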
Your next statement
I know capital/lowercase letters start with 0100 and 0110
May not necessarily be true. It depends on the character encoding. In ASCII, for example, uppercase Latin letters A-Z range from 65 to 90, which can be represented in binary as 0100 0001 through 0101 1010, and the lowercase letters a-z are from 97 to 122, which is represented in binary as 0110 0001 through 0111 1010.
Also, if you could tell me how to do decimals or special symbols that would be great
Again, it depends on the encoding. If we're talking about ASCII, a decimal point (.) is 46, which in binary is 0010 1110.
Here's a table with the full 8-bit extended ASCII character set: http://ascii-code.com/
If you want other characters beyond ASCII, you'll need to look into Unicode.
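As a quick illustration, you can print those binary patterns in C# like this (just a sketch):

// Print the 8-bit binary of a few ASCII characters.
foreach (char c in new[] { 'A', 'Z', 'a', 'z', '.' }) {
    Console.WriteLine($"{c} = {Convert.ToString(c, 2).PadLeft(8, '0')}");
}
// A = 01000001, Z = 01011010, a = 01100001, z = 01111010, . = 00101110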
I am in a CSCI class and we are just learning about program execution. I am running a program called "Brookshear machine simulator", which was written by the author of the class textbook (Computer Science, 11th edition, by J. Glenn Brookshear). The program is intended to add the contents of 11 and 0F, storing the result into F1. I have done everything necessary and produced the hex value in 11, which is 09. I am then asked to convert this into two's complement 8-bit binary, which is where I am having a problem. I will need to convert some hex values into two's complement 8-bit binary later in this lab, but I can't figure out how to do it. Can someone please help me understand what two's complement is and how it is related to (or the same as) 8-bit binary, so I can convert this value to two's complement 8-bit binary?
Here is a picture of the machine simulator with the inputs as directed by the lab instructions. My task is to find the hex value in 11 (09) and then convert it to two's complement 8-bit binary.
Each hexadecimal digit has a 4 bit binary equivalent:
0 0000
1 0001
2 0010
3 0011
4 0100
5 0101
6 0110
7 0111
8 1000
9 1001
A 1010
B 1011
C 1100
D 1101
E 1110
F 1111
So if you have a two-character hex value like 09, you can see that 0 = 0000 and 9 = 1001, so that would be:
00001001
which is an 8 bit value.
This works for any length of hex number of course, so for example 37FF in hex would be 0011011111111111 in binary.
Note that two's complement is irrelevant for your example as the number is positive.
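If you later want to do this conversion in code, here is a minimal C# sketch (assuming the value fits in a single byte):

string hex = "09";
byte value = Convert.ToByte(hex, 16);                        // parse the hex pair as one byte
string binary = Convert.ToString(value, 2).PadLeft(8, '0');  // those 8 bits are already the two's complement pattern
Console.WriteLine(binary);                                   // 00001001
Console.WriteLine((sbyte)value);                             // 9 (a value like F7 would print -9)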
I would like to understand how I can calculate manually CRC encoding.
I have a message to be sent, like 1110 1101 1011 0111, and the code generator 11001. In order to encode the message I add five zeros to the message (1110 1101 1011 0111 00000) and divide it by the generator 11001.
I should receive 1011000000100100 with remainder 0000100 - in such a case I can replace the five zeros with the right part of the remainder (00100). This is what I can see in an example I found somewhere.
But I cannot calculate it with the Windows calculator (calc.exe). I launch programmer's mode in calc.exe, type 1110 1101 1011 0111 00000 XOR 110001 and receive 111011011011011010001 instead of 1011000000100100. (Ordinary division gives 1001101100111110, which is not the correct value either.)
How can I perform XOR division (or rather obtain the remainder from this division) on two binary numbers?
Best regards!
XOR by itself isn't a division. The calculator tool is just doing a single bit-wise exclusive-or of the two binary numbers (with the shorter number assumed to be padded with leading zeros as necessary) and returning the result.
111011011011011100000 XOR
000000000000000110001 =
111011011011011010001
What you are looking for is an iterative shift-and-XOR process, which calc.exe doesn't do. If you want to do it manually, you're going to need a pencil and paper.
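If you want to see that shift-and-XOR process spelled out, here is a small C# sketch that works directly on bit strings (the method name is just illustrative):

static string CrcRemainder(string message, string generator)
{
    // Work on a mutable copy of the message bits; the generator stays fixed.
    char[] bits = message.ToCharArray();
    for (int i = 0; i <= bits.Length - generator.Length; i++)
    {
        if (bits[i] == '0') continue;              // only "divide" where the leading bit is 1
        for (int j = 0; j < generator.Length; j++)
        {
            // XOR the generator into the message, one bit at a time.
            bits[i + j] = bits[i + j] == generator[j] ? '0' : '1';
        }
    }
    // The last generator.Length - 1 bits are the remainder.
    return new string(bits, bits.Length - generator.Length + 1, generator.Length - 1);
}

For example, CrcRemainder("1101000", "1011") returns "001"; pass your own message (with the appended zero bits) and your generator as bit strings in the same way.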
I have a 32-bit hex value, for example 04FA4FA4, and I want to know how to convert it to BAMS in the form of a double. An example in any language would work fine; I am only concerned with learning the algorithm or formula to make the conversion. I know how to convert to BAMS when I have a form like 000:00.0000, but I can't figure out how to make the conversion from hex.
This link is the easiest to understand resource I found. The algorithm is simple:
(decimal hex value) * 180 / 2^(n-1) //where n is the number of bits
The example in the reference is,
0000 0000 1010 0110
0 0 A 6
166 * 180 / 2^15 ≈ 0.9118 degrees
The code for this algorithm is so simple, I don't think I need to enumerate it here. Let me know if someone feels this is incorrect.
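For reference, here is a minimal C# sketch of the 32-bit case (it assumes the value is treated as unsigned, like the 16-bit example above):

// BAMS to degrees: value * 180 / 2^(n-1), with n = 32 bits here.
uint raw = Convert.ToUInt32("04FA4FA4", 16);     // the 32-bit hex word
double degrees = raw * 180.0 / Math.Pow(2, 31);
Console.WriteLine(degrees);                      // roughly 7.0 degrees for this value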
Just wondering how I would go about converting binary to hexadecimal.
Would I first have to convert the binary to decimal and then to hexadecimal?
For example, 101101001.101110101010011
How would I go about converting a complex binary such as the above to hexadecimal?
Thanks in advance
Each 4 bits of a binary number represents a hexadecimal digit. So the best way to convert from binary to hexadecimal is to pad the binary number with leading zeroes so that the number of bits is divisible by four.
Then you process four bits at a time and convert them to a single hexadecimal digit:
0000 -> 0
0001 -> 1
0010 -> 2
....
1110 -> E
1111 -> F
No, you don't convert to decimal and then to hexadecimal, you convert to a numeric value, and then to hexadecimal.
(Decimal is also a textual representation of a number, just like binary and hexadecimal. Although decimal representation is used by default, a number doesn't have a textual representation in itself.)
As a hexadecimal digit corresponds to four binary digits you don't have to convert the entire string to a number, you can do it four binary digits at a time.
First fill up the binary number so that it has full groups of four digits:
000101101001.1011101010100110
Then you can convert each group to a number, and then to hexadecimal:
0001 0110 1001.1011 1010 1010 0110
169.BAA6
Alternatively, you can split the number into the two parts before and after the period and convert those from binary. The part before the period can be converted straight off, but the part after has to be right-padded to be correct.
Example in C#:
string binary = "101101001.101110101010011";
string[] parts = binary.Split('.');
while (parts[1].Length % 4 != 0) {
parts[1] += '0';
}
string result =
Convert.ToInt32(parts[0], 2).ToString("X") +
"." +
Convert.ToInt32(parts[1], 2).ToString("X");
You could simply have a small hash table, or other mapping converting each quadruplet of binary digits (as a string, assuming that's your input) into the corresponding hex digit (0 to 9, A to F) for the output string. You'll have to bunch the input bits up by 4, left-padding before the '.' and right-padding after it, with 0 in both cases, as needed.
So...:
locate the '.'
left of the '.', bunch by 4, left-padding the last bunch, going leftwards: in your example, 1001 first (rightmost), then 0110, finally 0001 (left-padded), and that's it;
ditto to the right -- in your example 1011, then 1010, then 1010, finally 0110 (right-padding)
each bunch of 4 binary digits, via a hash or other lookup, turns into the hex digit to put in that place in the output string.
Want some pseudo-code for it, e.g., Python?
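Here is a rough sketch of that approach (written in C# to match the earlier example; the table and padding arithmetic are just one way to do it):

// needs: using System; using System.Collections.Generic;
// Map each 4-bit group to its hex digit via a lookup table.
var nibbleToHex = new Dictionary<string, char> {
    {"0000",'0'}, {"0001",'1'}, {"0010",'2'}, {"0011",'3'},
    {"0100",'4'}, {"0101",'5'}, {"0110",'6'}, {"0111",'7'},
    {"1000",'8'}, {"1001",'9'}, {"1010",'A'}, {"1011",'B'},
    {"1100",'C'}, {"1101",'D'}, {"1110",'E'}, {"1111",'F'}
};

string binary = "101101001.101110101010011";
string[] parts = binary.Split('.');

// Left-pad the integer part and right-pad the fractional part to whole groups of 4.
string intPart  = parts[0].PadLeft((parts[0].Length + 3) / 4 * 4, '0');
string fracPart = parts[1].PadRight((parts[1].Length + 3) / 4 * 4, '0');

string hex = "";
for (int i = 0; i < intPart.Length; i += 4)
    hex += nibbleToHex[intPart.Substring(i, 4)];
hex += ".";
for (int i = 0; i < fracPart.Length; i += 4)
    hex += nibbleToHex[fracPart.Substring(i, 4)];

Console.WriteLine(hex);   // 169.BAA6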
The simplest approach, especially if you already can convert from binary digits to internal numeric representation and from internal numeric representation to hexadecimal digits, is to go binary->internal->hex. I say internal and not decimal, because even though it may print as decimal, it is actually being stored internally in binary format. That said, it is possible to go straight from one to the other. This does not apply to your specific example, but in many cases when converting from binary to hex, you can go four digits at a time, and simply lookup the corresponding hex values in a table. There are all sorts of ways to convert.
BIN to HEX
Binary and hex are natively compatible. Just group the binary digits (bits) in sets of 4 and substitute the corresponding hex digit.
More reference here:
http://en.wikipedia.org/wiki/Hexadecimal#Binary_conversion