As the title says, I'm curious to know how the checksum value is calculated; from what I've read it's calculated using two's complement. Below are two lines from the hex file which was loaded onto my microcontroller (I've added spaces to make them easier to read). S315 appears on every line, the address on line 1 is 080C0000, followed by 16 hex values which represent the data bytes, and the values AA on line 1 and AB on line 2 are, I assume, the checksum values.
For line 1 I've tried adding the following: 15+08+0C+00+00+4D+53+53+70+6F+74+31+00+66+10+AE+19+7E+63+1F+78, which gives me 555 hex, or 0101 0101 0101 in binary. I've entered that binary value into an online two's complement calculator, but it always says "invalid binary".
S3 15 080C0000 4D 53 53 70 6F 74 31 00 66 10 AE 19 7E 63 1F 78 AA
S3 15 080C0010 00 00 00 00 45 85 63 EB FF FF FF FF 04 00 03 00 AB
You add the byte values, like you've done. From that sum you take only the least significant byte.
Then, for Motorola HEX (SREC):
You take the one's complement of that byte by inverting its bits (i.e. 1s turn to 0s and vice versa).
And for Intel HEX:
You take the two's complement of that byte by inverting its bits and then adding 1.
Going by your example you have the sum 0x555. Then take the least significant byte, which is 0x55.
For Motorola HEX (SREC): Calculate the one's complement of that. You get 0xAA as the checksum.
For Intel HEX: Calculate the two's complement of that. You get 0xAB as the checksum.
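If it helps, here's a rough Python sketch of both calculations. The byte list is just the count, address and data bytes from line 1 of your file; the variable names are only for illustration.

record_bytes = [
    0x15,                                 # count byte
    0x08, 0x0C, 0x00, 0x00,               # 32-bit address
    0x4D, 0x53, 0x53, 0x70, 0x6F, 0x74, 0x31, 0x00,
    0x66, 0x10, 0xAE, 0x19, 0x7E, 0x63, 0x1F, 0x78,   # 16 data bytes
]

total = sum(record_bytes)                 # 0x555 for this record
lsb = total & 0xFF                        # least significant byte: 0x55

srec_checksum = lsb ^ 0xFF                # one's complement -> 0xAA (Motorola S-record)
ihex_checksum = (-total) & 0xFF           # two's complement -> 0xAB (Intel HEX style)

print(hex(srec_checksum), hex(ihex_checksum))

(Keep in mind that a real Intel HEX record sums a slightly different set of fields - length, address, record type and data - so the last line only applies the Intel-style rule to the same bytes for comparison.)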
I have to convert 1357AC.EF from hex to binary. I'm a little confused about what to do. Since it has a decimal point, do I convert it from hex to decimal by doing (1x16^5)+(3x16^4)+(5x16^3)+(7x16^2)+(10x16^1)+(12x16^0)+(14x16^-1)+(15x16^-2), and then convert that to binary by repeatedly dividing by 2 and taking the remainders? Or am I making this too hard for myself?
Just convert each individual hexadigit to its four-bit binary equivalent.
1357AC.EF would be 0001 0011 0101 0111 1010 1100 . 1110 1111
You are making it too hard: one hex character is 4 bits, e.g. 'A' = '1010'.
So just iterate through the string, get a character, look up the binary in a hash or an array, and then concatenate it with the earlier result.
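Here's a small Python sketch of that lookup-and-concatenate idea (the dictionary and function name are just for illustration):

HEX_TO_BIN = {d: format(int(d, 16), '04b') for d in '0123456789ABCDEF'}

def hex_to_binary(hex_string):
    # Look up each hex digit's 4-bit pattern and concatenate; pass '.' through unchanged.
    return ''.join('.' if ch == '.' else HEX_TO_BIN[ch] for ch in hex_string.upper())

print(hex_to_binary('1357AC.EF'))   # 000100110101011110101100.11101111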
In binary, how do you differentiate between numbers and letters? I believe that positive numbers begin with 0000 and negative numbers with 1000, then you add the next 4 digits, so -5 would be 1000 0101.
I know capital/lowercase letters start with 0100 and 0110, so I'm just wondering if I was right about the number thing.
Also, if you could tell me how to do decimals or special symbols, that would be great.
Thanks - Jon
Binary is just a representation of a value. It's true that for signed values, if the MSB is 1 the value is negative, and if the MSB is 0 the value is positive. However, the second part of your statement is not correct: 1000 0101 is not -5, it's actually -123. To represent -5 you take the value 5, which is 0000 0101, invert all the bits, and add one, giving you 1111 1011. This is called two's complement.
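If you want to check those steps on a computer, here's a minimal Python sketch of exactly that (invert the 8 bits, then add one):

value = 5
ones_complement = value ^ 0xFF                   # invert all 8 bits: 0000 0101 -> 1111 1010
twos_complement = (ones_complement + 1) & 0xFF   # add one: 1111 1011

print(format(twos_complement, '08b'))            # 11111011
print(format(-5 & 0xFF, '08b'))                  # same result by masking -5 to 8 bits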
Your next statement
I know capital/lowercase letters start with 0100 and 0110
May not necessarily be true. It depends on the character encoding. In ASCII, for example, uppercase Latin letters A-Z range from 65 to 90, which can be represented in binary as 0100 0001 through 0101 1010, and the lowercase letters a-z are from 97 to 122, which is represented in binary as 0110 0001 through 0111 1010.
Also, if you could tell me how to do decimals or special symbols, that would be great
Again, it depends on the encoding. If we're talking about ASCII, a decimal point (.) is 46, which in binary is 0010 1110.
Here's a table with all 8-bit ASCII characters: http://ascii-code.com/
If you want other characters beyond ASCII, you'll need to look into Unicode.
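As a quick check of the codes mentioned above, here's a small Python sketch that prints them in 8-bit binary:

for ch in ['A', 'Z', 'a', 'z', '.']:
    print(ch, ord(ch), format(ord(ch), '08b'))
# A 65 01000001
# Z 90 01011010
# a 97 01100001
# z 122 01111010
# . 46 00101110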
I am in a CSCI class and we are just learning about program execution. I am running a program called "Brookshear machine simulator", which was written by the author of the class textbook (Computer Science, 11th edition, by J. Glenn Brookshear). The program is intended to add the contents of 11 and 0F, storing the result into F1. I have done everything necessary and produced the hex value in 11, which is 09. I am then asked to convert this into two's complement 8-bit binary, which is where I am having a problem. I will need to convert more hex values into two's complement 8-bit binary later in this lab, but I can't figure out how to do it. Can someone please help me understand what two's complement is and how it relates to (or is the same as) 8-bit binary, so I can convert this value to two's complement 8-bit binary?
Here is a picture of the machine simulator with the inputs as directed by the lab instructions. My task is to find the hex value in 11 (which is 09) and then convert it to two's complement 8-bit binary.
Each hexadecimal digit has a 4 bit binary equivalent:
0 0000
1 0001
2 0010
3 0011
4 0100
5 0101
6 0110
7 0111
8 1000
9 1001
A 1010
B 1011
C 1100
D 1101
E 1110
F 1111
So if you have a two-character hex value, like 09, then you can see that 0 = 0000 and 9 = 1001, so that would be:
00001001
which is an 8 bit value.
This works for any length of hex number of course, so for example 37FF in hex would be 0011011111111111 in binary.
Note that two's complement doesn't change anything in your example, because the number is positive: a positive number looks the same in two's complement as it does in plain unsigned binary.
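If you ever want to automate it, here's a minimal Python sketch (the function name is made up) that turns a one-byte hex value like 09 into its 8-bit pattern:

def hex_byte_to_binary(hex_byte):
    # Mask to one byte and format as an 8-bit binary string.
    return format(int(hex_byte, 16) & 0xFF, '08b')

print(hex_byte_to_binary('09'))   # 00001001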
Out of curiosity, how exactly does binary code get converted into letters? I know there are sites that automatically convert binary to words for you, but I want to understand the specific, intermediate steps that binary code goes through before being converted into letters.
Here's a way to convert binary numbers to ASCII characters that is often simple enough to do in your head.
1 - Convert every 4 binary digits into one hex digit.
Here's a binary to hex conversion chart:
0001 = 1
0010 = 2
0011 = 3
0100 = 4
0101 = 5
0110 = 6
0111 = 7
1000 = 8
1001 = 9
1010 = a (the hex number a, not the letter a)
1011 = b
1100 = c
1101 = d
1110 = e
1111 = f
(The hexadecimal numbers a through f are the decimal numbers 10 through 15. That's what hexadecimal, or "base 16" is - instead of each digit being capable of representing 10 different numbers [0 - 9], like decimal or "base 10" does, each digit is instead capable of representing 16 different numbers [0 - f].)
Once you know that chart, converting any string of binary digits into a string of hex digits is simple.
For example,
01000100 = 0100 0100 = 44 hex
1010001001110011 = 1010 0010 0111 0011 = a273 hex
Simple enough, right? It is a simple matter to convert a binary number of any length into its hexadecimal equivalent.
(This works because hexadecimal is base 16 and binary is base 2 and 16 is the 4th power of 2, so it takes 4 binary digits to make 1 hex digit. 10, on the other hand, is not a power of 2, so we can't convert binary to decimal nearly as easily.)
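If you'd rather let the computer do step 1, a couple of Python one-liners (just as an illustration) do the same conversion:

print(format(int('01000100', 2), 'x'))           # 44
print(format(int('1010001001110011', 2), 'x'))   # a273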
2 - Split the string of hex digits into pairs.
When converting a number into ASCII, every 2 hex digits is a character. So break the hex string into sets of 2 digits.
You would split a hex number like 7340298b392 into 6 pairs, like this:
7340298b392 = 07 34 02 98 b3 92
(Notice I prepended a 0, since I had an odd number of hex digits.)
That's 6 pairs of hex digits, so it's going to be 6 characters. (Except I know right away that 98, b3 and 92 aren't letters. I'll explain why in a minute.)
3 - Convert each pair of hex digits into a decimal number.
Do this by multiplying the (decimal equivalent of the) left digit by 16, and adding the 2nd.
For example, b3 hex = 11*16 + 3, which is 176 + 3, which is 179.
(b hex is 11 decimal.)
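In Python terms, just to check the arithmetic:

print(int('b3', 16), 11 * 16 + 3)   # 179 179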
4 - Convert the decimal numbers into ASCII characters.
Now, to get the ASCII letters for the decimal numbers, simply keep in mind that in ASCII, 65 is an uppercase 'A', and 97 is a lowercase 'a'.
So what letter is 68?
68 is the 4th letter of the alphabet in uppercase, right?
65 = A, 66 = B, 67 = C, 68 = D.
So 68 is 'D'.
You take the decimal number, subtract 64 for uppercase letters if the number is less than 97, or 96 for lowercase letters if the number is 97 or larger, and that's the number of the letter of the alphabet associated with that set of 2 hex digits.
Alternatively, if you're not afraid of a little bit of easy hex arithmetic, you can skip step 3, and just go straight from hex to ASCII, by remembering, for example, that
hex 41 = 'A'
hex 61 = 'a'
So subtract 40 hex for uppercase letters or 60 hex for lowercase letters, and convert what's left to decimal to get the alphabet letter number.
For example
01101100 = 6c, 6c - 60 = c = 12 decimal = 'l'
01010010 = 52, 52 - 40 = 12 hex = 18 decimal = 'R'
(When doing this, it's helpful to remember that 'm' (or 'M') is the 13th letter of the alphabet. So you can count up or down from 13 to find a letter that's nearer to the middle than to either end.)
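Here's that shortcut as a tiny Python sketch (the function name is made up); it returns the letter's position in the alphabet for a pair of hex digits:

def letter_number(hex_pair):
    value = int(hex_pair, 16)
    # Subtract 0x60 for lowercase (0x61 and up), 0x40 for uppercase.
    return value - (0x60 if value >= 0x61 else 0x40)

print(letter_number('6c'), letter_number('52'))   # 12 18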
I saw this on a shirt once, and was able to read it in my head:
01000100
01000001
01000100
I did it like this:
01000100 = 0100 0100 = 44 hex, - 40 hex = ucase letter 4 = D
01000001 = 0100 0001 = 41 hex, - 40 hex = ucase letter 1 = A
01000100 = 0100 0100 = 44 hex, - 40 hex = ucase letter 4 = D
The shirt said "DAD", which I thought was kinda cool, since it was being purchased by a pregnant woman. Her husband must be a geek like me.
How did I know right away that 92, b3, and 98 were not letters?
Because the ASCII code for a lowercase 'z' is 96 + 26 = 122, which in hex is 7a. 7a is the largest hex number for a letter. Anything larger than 7a is not a letter.
So that's how you can do it as a human.
How do computer programs do it?
For each set of 8 binary digits, convert it to a number, and look it up in an ASCII table.
(That's one pretty obvious and straightforward way. A typical programmer could probably think of 10 or 15 other ways in the space of a few minutes. The details depend on the computer language environment.)
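For what it's worth, here's a minimal Python sketch of that approach (the function name is just for illustration): split the bit string into 8-bit groups, turn each group into a number, and look up its character.

def binary_to_text(bits):
    # Split into 8-bit chunks, convert each to an integer, then to its character.
    chunks = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return ''.join(chr(int(chunk, 2)) for chunk in chunks)

print(binary_to_text('010001000100000101000100'))   # DAD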
Assuming that by "binary code" you mean just plain old data (sequences of bits, or bytes), and that by "letters" you mean characters, the answer is in two steps. But first, some background.
A character is just a named symbol, like "LATIN CAPITAL LETTER A" or "GREEK SMALL LETTER PI" or "BLACK CHESS KNIGHT". Do not confuse a character (abstract symbol) with a glyph (a picture of a character).
A character set is a particular set of characters, each of which is associated with a special number, called its codepoint. To see the codepoint mappings in the Unicode character set, see http://www.unicode.org/Public/UNIDATA/UnicodeData.txt.
Okay now here are the two steps:
The data, if it is textual, must be accompanied somehow by a character encoding, something like UTF-8, Latin-1, US-ASCII, etc. Each character encoding scheme specifies in great detail how byte sequences are interpreted as codepoints (and conversely how codepoints are encoded as byte sequences).
Once the byte sequences are interpreted as codepoints, you have your characters, because each character has a specific codepoint.
A couple notes:
In some encodings, certain byte sequences correspond to no codepoints at all, so you can have character decoding errors.
In some character sets, there are codepoints that are unused, that is, they correspond to no character at all.
In other words, not every byte sequence means something as text.
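Here's a small Python sketch of those two steps, assuming the bytes are meant to be UTF-8 text; it also shows what a decoding error looks like for a byte sequence that isn't valid in that encoding:

data = bytes([0x66, 0x6F, 0x6F])      # the byte sequence 66 6F 6F

print(data.decode('ascii'))           # foo
print(data.decode('utf-8'))           # foo (ASCII is a subset of UTF-8)

bad = bytes([0xFF, 0xFE])
try:
    bad.decode('utf-8')
except UnicodeDecodeError as err:
    print('decoding error:', err)     # not every byte sequence is valid text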
Do you mean the conversion 011001100110111101101111 → foo, for example? You just take the binary stream, split it into separate bytes (01100110, 01101111, 01101111) and look up the ASCII character that corresponds to given number. For example, 01100110 is 102 in decimal and the ASCII character with code 102 is f:
$ perl -E 'say 0b01100110'
102
$ perl -E 'say chr(102)'
f
(See what the chr function does.) You can generalize this algorithm and have a different number of bits per character and different encodings; the point remains the same.
To read binary ASCII characters with great speed using only your head:
Letters start with leading bits 01. Bit 3 is on (1) for lower case, off (0) for capitals. Scan the following bits 4–8 for the first that is on, and select the starting letter from the same index in this string: “PHDBA” (think P.H.D., Bachelors in Arts). E.g. 1xxxx = P, 01xxx = H, etc. Then convert the remaining bits to an integer value (e.g. 010 = 2), and count that many letters up from your starting letter. E.g. 01001010 => H+2 = J.
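Just for fun, here's a rough Python sketch of that trick, checked against chr(); the function name is made up:

def phdba_decode(bits8):
    # Letters start with 01; bit 3 selects lowercase (1) or uppercase (0).
    lower = bits8[2] == '1'
    tail = bits8[3:]                          # bits 4-8
    first = tail.index('1')                   # first bit that is on
    start = [16, 8, 4, 2, 1][first]           # P, H, D, B, A as letter numbers
    offset = int(tail[first + 1:] or '0', 2)  # remaining bits as an integer
    base = ord('a') if lower else ord('A')
    return chr(base + start + offset - 1)

print(phdba_decode('01001010'), chr(int('01001010', 2)))   # J J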
http://www.roubaixinteractive.com/PlayGround/Binary_Conversion/The_Characters.asp - it just looks them up here... (not literally HERE, but in a table like this one).
There are 8 bits in a byte. One byte can be one symbol. One bit is either on or off.
Why not just do this: take 0100100001001001 and split it into two 8-bit bytes (01001000, 01001001). Then work out the powers of 2:
01001000. 01001001.
For the first byte, ignore the first three bits (they determine whether it's a capital letter or not), then go right to left over the remaining bits assigning powers of 2 (2^0, 2^1, 2^2, 2^3, 2^4). Add up the values of the bits that are set; there's only one, and it equals 8, and the eighth letter of the alphabet is H, so our first byte is the letter H. Try it on the other byte.